/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed.  This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()").  You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory.  This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else.  And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit.  This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock.  By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck.  It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc.  It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org).  Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
  http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI API's (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT       default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern  __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied
  manually, as described below.
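
  For example, a minimal sketch of user-supplied locks built on
  pthreads (the hook names here -- MLOCK_T, INITIAL_LOCK, etc. -- are
  assumed to match the user-defined lock section later in this file;
  check that section before relying on them):
    #define USE_LOCKS 2
    #define MLOCK_T           pthread_mutex_t
    #define INITIAL_LOCK(lk)  pthread_mutex_init(lk, NULL)
    #define DESTROY_LOCK(lk)  pthread_mutex_destroy(lk)
    #define ACQUIRE_LOCK(lk)  pthread_mutex_lock(lk)
    #define RELEASE_LOCK(lk)  pthread_mutex_unlock(lk)
    #define TRY_LOCK(lk)      (pthread_mutex_trylock(lk) == 0)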

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and
  recent MS compilers.  Otherwise, posix locks or win32 critical
  sections are used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK            default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
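
  For example (a usage sketch), compiling this file with
  -DUSE_DL_PREFIX leaves the system malloc untouched while exposing
  the prefixed entry points:
    void* p = dlmalloc(1024);   /* served by this file */
    void* q = malloc(1024);     /* served by the system library */
    dlfree(p);
    free(q);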

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space.  Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc is unable to
  return memory because there is none available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and space it would otherwise take to discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL       default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of segments that are contiguous, and selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation.  If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple
  calls to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS            default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H  default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least
  one page and must be a power of two.  Setting to 0 causes
  initialization to either page size or win32 region size.  (Note: In
  previous versions of malloc, the equivalent of this option was
  called "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations). Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation.  The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory.  To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
#define USE_LOCKS  ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
                    (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                              \
     ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||       \
      defined(__i386__) || defined(__x86_64__))) ||                     \
    (defined(_MSC_VER) && _MSC_VER>=1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */

#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else /* linux */
#define HAVE_MREMAP 0
#endif /* linux */
#endif /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif /* ONLY_MSPACES */
#endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif /* MORECORE_CONTIGUOUS */
#endif /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif /* MORECORE_CONTIGUOUS */
#endif /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif /* MORECORE_CANNOT_TRIM */
#endif /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif /* MALLINFO_FIELD_TYPE */
#ifndef NO_MALLOC_STATS
#define NO_MALLOC_STATS 0
#endif /* NO_MALLOC_STATS */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be
  of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long.  If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
  #if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
  #elif defined(_MSC_VER)
    #define FORCEINLINE __forceinline
  #endif
#endif
#ifndef NOINLINE
  #if defined(__GNUC__)
    #define NOINLINE __attribute__ ((noinline))
  #elif defined(_MSC_VER)
    #define NOINLINE __declspec(noinline)
  #else
    #define NOINLINE
  #endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
 #define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
 #define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlposix_memalign       posix_memalign
#define dlrealloc              realloc
#define dlrealloc_in_place     realloc_in_place
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlmalloc_footprint_limit malloc_footprint_limit
#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
#define dlmalloc_inspect_all   malloc_inspect_all
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#define dlbulk_free            bulk_free
#endif /* USE_DL_PREFIX */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
909DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
910
911/*
912 memalign(size_t alignment, size_t n);
913 Returns a pointer to a newly allocated chunk of n bytes, aligned
914 in accord with the alignment argument.
915
916 The alignment argument should be a power of two. If the argument is
917 not a power of two, the nearest greater power is used.
918 8-byte alignment is guaranteed by normal malloc calls, so don't
919 bother calling memalign with an argument of 8 or less.
920
921 Overreliance on memalign is a sure way to fragment space.
922*/
923DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
924
925/*
926 int posix_memalign(void** pp, size_t alignment, size_t n);
927 Allocates a chunk of n bytes, aligned in accord with the alignment
928 argument. Differs from memalign only in that it (1) assigns the
929 allocated memory to *pp rather than returning it, (2) fails and
930 returns EINVAL if the alignment is not a power of two (3) fails and
931 returns ENOMEM if memory cannot be allocated.
932*/
933DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
934
935/*
936 valloc(size_t n);
937 Equivalent to memalign(pagesize, n), where pagesize is the page
938 size of the system. If the pagesize is unknown, 4096 is used.
939*/
940DLMALLOC_EXPORT void* dlvalloc(size_t);
941
942/*
943 mallopt(int parameter_number, int parameter_value)
944 Sets tunable parameters The format is to provide a
945 (parameter-number, parameter-value) pair. mallopt then sets the
946 corresponding parameter to the argument value if it can (i.e., so
947 long as the value is meaningful), and returns 1 if successful else
948 0. To workaround the fact that mallopt is specified to use int,
949 not size_t parameters, the value -1 is specially treated as the
950 maximum unsigned size_t value.
951
952 SVID/XPG/ANSI defines four standard param numbers for mallopt,
953 normally defined in malloc.h. None of these are use in this malloc,
954 so setting them has no effect. But this malloc also supports other
955 options in mallopt. See below for details. Briefly, supported
956 parameters are as follows (listed defaults are for "typical"
957 configurations).
958
959 Symbol param # default allowed param values
960 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
961 M_GRANULARITY -2 page size any power of 2 >= page size
962 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
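
  For example (a usage sketch), to raise the trim threshold to 8MB
  and disable mmap for large requests (-1 maps to the maximum size_t,
  as described above):

    dlmallopt(M_TRIM_THRESHOLD, 8 * 1024 * 1024);
    if (dlmallopt(M_MMAP_THRESHOLD, -1) == 0) {
      /* parameter was rejected */
    }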
*/
DLMALLOC_EXPORT int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system, returning the last value returned by
  malloc_set_footprint_limit, or the maximum size_t value if
  never set. The returned value reflects a permission. There is no
  guarantee that this number of bytes can actually be obtained from
  the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();

/*
  malloc_set_footprint_limit();
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; this actual value is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
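
  For example (a usage sketch), to cap the heap near 16MB:

    size_t actual = dlmalloc_set_footprint_limit((size_t)16 * 1024 * 1024);
    /* 'actual' holds the page-rounded limit that will be enforced */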
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                     void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes.  Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address.  The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use chunks with size of at
  least 1000, you could write:
    static int count = 0;
    void count_chunks(void* start, void* end, size_t used, void* arg) {
      if (used >= 1000) ++count;
    }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                          void* arg);

#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (int i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool); // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char* body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array.  This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned.  For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
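
  For example (a usage sketch), releasing a pool of elements obtained
  from independent_calloc in one call:

    void* elems[8];
    if (independent_calloc(8, 64, elems) != 0) {
      /* ... use the eight 64-byte cleared elements ... */
      bulk_free(elems, 8);   /* frees all of them at once */
    }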
*/
DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
 */
DLMALLOC_EXPORT void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
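
  For example (a usage sketch), after freeing a large working set
  (big_buffer is a hypothetical earlier allocation):

    free(big_buffer);
    malloc_trim(0);   /* return as much unused memory as possible */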
*/
DLMALLOC_EXPORT int dlmalloc_trim(size_t);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
DLMALLOC_EXPORT void dlmalloc_stats(void);

1246/*
1247 malloc_usable_size(void* p);
1248
1249 Returns the number of bytes you can actually use in
1250 an allocated chunk, which may be more than you requested (although
1251 often not) due to alignment and minimum size constraints.
1252 You can use this many bytes without worrying about
1253 overwriting other allocated objects. This is not a particularly great
1254 programming practice. malloc_usable_size can be more useful in
1255 debugging and assertions, for example:
1256
1257 p = malloc(n);
1258 assert(malloc_usable_size(p) >= 256);
1259*/
1260/* BEGIN android-changed: added const */
1261size_t dlmalloc_usable_size(const void*);
1262/* END android-change */
1263
1264#endif /* ONLY_MSPACES */
1265
1266#if MSPACES
1267
1268/*
1269 mspace is an opaque type representing an independent
1270 region of space that supports mspace_malloc, etc.
1271*/
1272typedef void* mspace;
1273
1274/*
1275 create_mspace creates and returns a new independent space with the
1276 given initial capacity, or, if 0, the default granularity size. It
1277 returns null if there is no system memory available to create the
1278 space. If argument locked is non-zero, the space uses a separate
1279 lock to control access. The capacity of the space will grow
1280 dynamically as needed to service mspace_malloc requests. You can
1281 control the sizes of incremental increases of this space by
1282 compiling with a different DEFAULT_GRANULARITY or dynamically
1283 setting with mallopt(M_GRANULARITY, value).
1284*/
1285DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1286
1287/*
1288 destroy_mspace destroys the given space, and attempts to return all
1289 of its memory back to the system, returning the total number of
1290 bytes freed. After destruction, the results of access to all memory
1291 used by the space become undefined.
1292*/
1293DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
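
/*
  A hedged sketch of typical mspace usage, combining the calls above
  with mspace_malloc/mspace_free declared below (error paths elided):

    mspace msp = create_mspace(0, 1);         // default capacity, locked
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      mspace_free(msp, p);
      size_t released = destroy_mspace(msp);  // total bytes given back
    }
*/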
1294
1295/*
1296 create_mspace_with_base uses the memory supplied as the initial base
1297 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1298 space is used for bookkeeping, so the capacity must be at least this
1299 large. (Otherwise 0 is returned.) When this initial space is
1300 exhausted, additional memory will be obtained from the system.
1301 Destroying this space will deallocate all additionally allocated
1302 space (if possible) but not the initial base.
1303*/
1304DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
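
/*
  A hedged sketch using a caller-supplied buffer; the 8k size is
  arbitrary, but must exceed the bookkeeping overhead noted above:

    static char arena[8 * 1024];
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 100);
      // ... destroy_mspace(msp) releases extra segments, not `arena' itself
    }
*/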
1305
1306/*
1307 mspace_track_large_chunks controls whether requests for large chunks
1308 are allocated in their own untracked mmapped regions, separate from
1309 others in this mspace. By default large chunks are not tracked,
1310 which reduces fragmentation. However, such chunks are not
1311 necessarily released to the system upon destroy_mspace. Enabling
1312 tracking by setting to true may increase fragmentation, but avoids
1313 leakage when relying on destroy_mspace to release all memory
1314 allocated using this space. The function returns the previous
1315 setting.
1316*/
1317DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
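
/*
  Hedged usage sketch: enable tracking when an mspace will be torn
  down wholesale, so that large mmapped chunks are reclaimed too:

    mspace msp = create_mspace(0, 0);
    int previous = mspace_track_large_chunks(msp, 1);
    // ... allocate, including requests large enough to be mmapped ...
    destroy_mspace(msp);  // now also releases the large chunks
*/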
1318
1319
1320/*
1321 mspace_malloc behaves as malloc, but operates within
1322 the given space.
1323*/
1324DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1325
1326/*
1327 mspace_free behaves as free, but operates within
1328 the given space.
1329
1330 If compiled with FOOTERS==1, mspace_free is not actually needed.
1331 free may be called instead of mspace_free because freed chunks from
1332 any space are handled by their originating spaces.
1333*/
1334DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1335
1336/*
1337 mspace_realloc behaves as realloc, but operates within
1338 the given space.
1339
1340 If compiled with FOOTERS==1, mspace_realloc is not actually
1341 needed. realloc may be called instead of mspace_realloc because
1342 realloced chunks from any space are handled by their originating
1343 spaces.
1344*/
1345DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1346
1347/*
1348 mspace_calloc behaves as calloc, but operates within
1349 the given space.
1350*/
1351DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1352
1353/*
1354 mspace_memalign behaves as memalign, but operates within
1355 the given space.
1356*/
1357DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1358
1359/*
1360 mspace_independent_calloc behaves as independent_calloc, but
1361 operates within the given space.
1362*/
1363DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1364 size_t elem_size, void* chunks[]);
1365
1366/*
1367 mspace_independent_comalloc behaves as independent_comalloc, but
1368 operates within the given space.
1369*/
1370DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1371 size_t sizes[], void* chunks[]);
1372
1373/*
1374 mspace_footprint() returns the number of bytes obtained from the
1375 system for this space.
1376*/
1377DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1378
1379/*
1380 mspace_max_footprint() returns the peak number of bytes obtained from the
1381 system for this space.
1382*/
1383DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1384
1385
1386#if !NO_MALLINFO
1387/*
1388 mspace_mallinfo behaves as mallinfo, but reports properties of
1389 the given space.
1390*/
1391DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1392#endif /* NO_MALLINFO */
1393
1394/*
1395  mspace_usable_size(void* p) behaves the same as malloc_usable_size.
1396*/
1397DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1398
1399/*
1400 mspace_malloc_stats behaves as malloc_stats, but reports
1401 properties of the given space.
1402*/
1403DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1404
1405/*
1406 mspace_trim behaves as malloc_trim, but
1407 operates within the given space.
1408*/
1409DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1410
1411/*
1412 An alias for mallopt.
1413*/
1414DLMALLOC_EXPORT int mspace_mallopt(int, int);
1415
1416#endif /* MSPACES */
1417
1418#ifdef __cplusplus
1419} /* end of extern "C" */
1420#endif /* __cplusplus */
1421
1422/*
1423 ========================================================================
1424 To make a fully customizable malloc.h header file, cut everything
1425 above this line, put into file malloc.h, edit to suit, and #include it
1426 on the next line, as well as in programs that use this malloc.
1427 ========================================================================
1428*/
1429
1430/* #include "malloc.h" */
1431
1432/*------------------------------ internal #includes ---------------------- */
1433
1434#ifdef _MSC_VER
1435#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1436#endif /* _MSC_VER */
1437#if !NO_MALLOC_STATS
1438#include <stdio.h> /* for printing in malloc_stats */
1439#endif /* NO_MALLOC_STATS */
1440#ifndef LACKS_ERRNO_H
1441#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1442#endif /* LACKS_ERRNO_H */
1443#ifdef DEBUG
1444#if ABORT_ON_ASSERT_FAILURE
1445#undef assert
1446#define assert(x) if(!(x)) ABORT
1447#else /* ABORT_ON_ASSERT_FAILURE */
1448#include <assert.h>
1449#endif /* ABORT_ON_ASSERT_FAILURE */
1450#else /* DEBUG */
1451#ifndef assert
1452#define assert(x)
1453#endif
1454#define DEBUG 0
1455#endif /* DEBUG */
1456#if !defined(WIN32) && !defined(LACKS_TIME_H)
1457#include <time.h> /* for magic initialization */
1458#endif /* WIN32 */
1459#ifndef LACKS_STDLIB_H
1460#include <stdlib.h> /* for abort() */
1461#endif /* LACKS_STDLIB_H */
1462#ifndef LACKS_STRING_H
1463#include <string.h> /* for memset etc */
1464#endif /* LACKS_STRING_H */
1465#if USE_BUILTIN_FFS
1466#ifndef LACKS_STRINGS_H
1467#include <strings.h> /* for ffs */
1468#endif /* LACKS_STRINGS_H */
1469#endif /* USE_BUILTIN_FFS */
1470#if HAVE_MMAP
1471#ifndef LACKS_SYS_MMAN_H
1472/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1473#if (defined(linux) && !defined(__USE_GNU))
1474#define __USE_GNU 1
1475#include <sys/mman.h> /* for mmap */
1476#undef __USE_GNU
1477#else
1478#include <sys/mman.h> /* for mmap */
1479#endif /* linux */
1480#endif /* LACKS_SYS_MMAN_H */
1481#ifndef LACKS_FCNTL_H
1482#include <fcntl.h>
1483#endif /* LACKS_FCNTL_H */
1484#endif /* HAVE_MMAP */
1485#ifndef LACKS_UNISTD_H
1486#include <unistd.h> /* for sbrk, sysconf */
1487#else /* LACKS_UNISTD_H */
1488#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1489extern void* sbrk(ptrdiff_t);
1490#endif /* FreeBSD etc */
1491#endif /* LACKS_UNISTD_H */
1492
1493/* Declarations for locking */
1494#if USE_LOCKS
1495#ifndef WIN32
1496#if defined (__SVR4) && defined (__sun) /* solaris */
1497#include <thread.h>
1498#elif !defined(LACKS_SCHED_H)
1499#include <sched.h>
1500#endif /* solaris or LACKS_SCHED_H */
1501#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1502#include <pthread.h>
1503#endif /* USE_RECURSIVE_LOCKS ... */
1504#elif defined(_MSC_VER)
1505#ifndef _M_AMD64
1506/* These are already defined on AMD64 builds */
1507#ifdef __cplusplus
1508extern "C" {
1509#endif /* __cplusplus */
1510LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1511LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1512#ifdef __cplusplus
1513}
1514#endif /* __cplusplus */
1515#endif /* _M_AMD64 */
1516#pragma intrinsic (_InterlockedCompareExchange)
1517#pragma intrinsic (_InterlockedExchange)
1518#define interlockedcompareexchange _InterlockedCompareExchange
1519#define interlockedexchange _InterlockedExchange
1520#elif defined(WIN32) && defined(__GNUC__)
1521#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1522#define interlockedexchange __sync_lock_test_and_set
1523#endif /* Win32 */
1524#else /* USE_LOCKS */
1525#endif /* USE_LOCKS */
1526
1527#ifndef LOCK_AT_FORK
1528#define LOCK_AT_FORK 0
1529#endif
1530
1531/* Declarations for bit scanning on win32 */
1532#if defined(_MSC_VER) && _MSC_VER>=1300
1533#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1534#ifdef __cplusplus
1535extern "C" {
1536#endif /* __cplusplus */
1537unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1538unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1539#ifdef __cplusplus
1540}
1541#endif /* __cplusplus */
1542
1543#define BitScanForward _BitScanForward
1544#define BitScanReverse _BitScanReverse
1545#pragma intrinsic(_BitScanForward)
1546#pragma intrinsic(_BitScanReverse)
1547#endif /* BitScanForward */
1548#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1549
1550#ifndef WIN32
1551#ifndef malloc_getpagesize
1552# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1553# ifndef _SC_PAGE_SIZE
1554# define _SC_PAGE_SIZE _SC_PAGESIZE
1555# endif
1556# endif
1557# ifdef _SC_PAGE_SIZE
1558# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1559# else
1560# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1561 extern size_t getpagesize();
1562# define malloc_getpagesize getpagesize()
1563# else
1564# ifdef WIN32 /* use supplied emulation of getpagesize */
1565# define malloc_getpagesize getpagesize()
1566# else
1567# ifndef LACKS_SYS_PARAM_H
1568# include <sys/param.h>
1569# endif
1570# ifdef EXEC_PAGESIZE
1571# define malloc_getpagesize EXEC_PAGESIZE
1572# else
1573# ifdef NBPG
1574# ifndef CLSIZE
1575# define malloc_getpagesize NBPG
1576# else
1577# define malloc_getpagesize (NBPG * CLSIZE)
1578# endif
1579# else
1580# ifdef NBPC
1581# define malloc_getpagesize NBPC
1582# else
1583# ifdef PAGESIZE
1584# define malloc_getpagesize PAGESIZE
1585# else /* just guess */
1586# define malloc_getpagesize ((size_t)4096U)
1587# endif
1588# endif
1589# endif
1590# endif
1591# endif
1592# endif
1593# endif
1594#endif
1595#endif
1596
1597/* ------------------- size_t and alignment properties -------------------- */
1598
1599/* The byte and bit size of a size_t */
1600#define SIZE_T_SIZE (sizeof(size_t))
1601#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1602
1603/* Some constants coerced to size_t */
1604/* Annoying but necessary to avoid errors on some platforms */
1605#define SIZE_T_ZERO ((size_t)0)
1606#define SIZE_T_ONE ((size_t)1)
1607#define SIZE_T_TWO ((size_t)2)
1608#define SIZE_T_FOUR ((size_t)4)
1609#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1610#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1611#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1612#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1613
1614/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1615#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1616
1617/* True if address a has acceptable alignment */
1618#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1619
1620/* the number of bytes to offset an address to align it */
1621#define align_offset(A)\
1622 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1623 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
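
/*
  Worked example (assuming MALLOC_ALIGNMENT == 8, so CHUNK_ALIGN_MASK
  == 7): align_offset(0x1003) == (8 - (0x1003 & 7)) & 7 == 5, and
  0x1003 + 5 == 0x1008 is 8-byte aligned.
*/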
1624
1625/* -------------------------- MMAP preliminaries ------------------------- */
1626
1627/*
1628 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1629 checks to fail so compiler optimizer can delete code rather than
1630 using so many "#if"s.
1631*/
1632
1633
1634/* MORECORE and MMAP must return MFAIL on failure */
1635#define MFAIL ((void*)(MAX_SIZE_T))
1636#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1637
1638#if HAVE_MMAP
1639
1640#ifndef WIN32
1641#define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1642#define MMAP_PROT (PROT_READ|PROT_WRITE)
1643#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1644#define MAP_ANONYMOUS MAP_ANON
1645#endif /* MAP_ANON */
1646#ifdef MAP_ANONYMOUS
1647#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1648#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1649#else /* MAP_ANONYMOUS */
1650/*
1651 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1652 is unlikely to be needed, but is supplied just in case.
1653*/
1654#define MMAP_FLAGS (MAP_PRIVATE)
1655static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1656#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1657 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1658 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1659 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1660#endif /* MAP_ANONYMOUS */
1661
1662#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1663
1664#else /* WIN32 */
1665
1666/* Win32 MMAP via VirtualAlloc */
1667static FORCEINLINE void* win32mmap(size_t size) {
1668 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1669 return (ptr != 0)? ptr: MFAIL;
1670}
1671
1672/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1673static FORCEINLINE void* win32direct_mmap(size_t size) {
1674 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1675 PAGE_READWRITE);
1676 return (ptr != 0)? ptr: MFAIL;
1677}
1678
1679/* This function supports releasing coalesced segments */
1680static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1681 MEMORY_BASIC_INFORMATION minfo;
1682 char* cptr = (char*)ptr;
1683 while (size) {
1684 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1685 return -1;
1686 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1687 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1688 return -1;
1689 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1690 return -1;
1691 cptr += minfo.RegionSize;
1692 size -= minfo.RegionSize;
1693 }
1694 return 0;
1695}
1696
1697#define MMAP_DEFAULT(s) win32mmap(s)
1698#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1699#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1700#endif /* WIN32 */
1701#endif /* HAVE_MMAP */
1702
1703#if HAVE_MREMAP
1704#ifndef WIN32
1705#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1706#endif /* WIN32 */
1707#endif /* HAVE_MREMAP */
1708
1709/**
1710 * Define CALL_MORECORE
1711 */
1712#if HAVE_MORECORE
1713 #ifdef MORECORE
1714 #define CALL_MORECORE(S) MORECORE(S)
1715 #else /* MORECORE */
1716 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1717 #endif /* MORECORE */
1718#else /* HAVE_MORECORE */
1719 #define CALL_MORECORE(S) MFAIL
1720#endif /* HAVE_MORECORE */
1721
1722/**
1723 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1724 */
1725#if HAVE_MMAP
1726 #define USE_MMAP_BIT (SIZE_T_ONE)
1727
1728 #ifdef MMAP
1729 #define CALL_MMAP(s) MMAP(s)
1730 #else /* MMAP */
1731 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1732 #endif /* MMAP */
1733 #ifdef MUNMAP
1734 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1735 #else /* MUNMAP */
1736 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1737 #endif /* MUNMAP */
1738 #ifdef DIRECT_MMAP
1739 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1740 #else /* DIRECT_MMAP */
1741 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1742 #endif /* DIRECT_MMAP */
1743#else /* HAVE_MMAP */
1744 #define USE_MMAP_BIT (SIZE_T_ZERO)
1745
1746 #define MMAP(s) MFAIL
1747 #define MUNMAP(a, s) (-1)
1748 #define DIRECT_MMAP(s) MFAIL
1749 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1750 #define CALL_MMAP(s) MMAP(s)
1751 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1752#endif /* HAVE_MMAP */
1753
1754/**
1755 * Define CALL_MREMAP
1756 */
1757#if HAVE_MMAP && HAVE_MREMAP
1758 #ifdef MREMAP
1759 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1760 #else /* MREMAP */
1761 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1762 #endif /* MREMAP */
1763#else /* HAVE_MMAP && HAVE_MREMAP */
1764 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1765#endif /* HAVE_MMAP && HAVE_MREMAP */
1766
1767/* mstate bit set if contiguous morecore disabled or failed */
1768#define USE_NONCONTIGUOUS_BIT (4U)
1769
1770/* segment bit set in create_mspace_with_base */
1771#define EXTERN_BIT (8U)
1772
1773
1774/* --------------------------- Lock preliminaries ------------------------ */
1775
1776/*
1777 When locks are defined, there is one global lock, plus
1778 one per-mspace lock.
1779
1780  The global lock ensures that mparams.magic and other unique
1781  mparams values are initialized only once. It also protects
1782  sequences of calls to MORECORE.  In many cases sys_alloc requires
1783  two calls that should not be interleaved with calls by other
1784  threads.  This does not protect against direct calls to MORECORE
1785  by other threads not using this lock, so there is still code to
1786  cope as best we can with interference.
1787
1788 Per-mspace locks surround calls to malloc, free, etc.
1789 By default, locks are simple non-reentrant mutexes.
1790
1791 Because lock-protected regions generally have bounded times, it is
1792 OK to use the supplied simple spinlocks. Spinlocks are likely to
1793 improve performance for lightly contended applications, but worsen
1794 performance under heavy contention.
1795
1796 If USE_LOCKS is > 1, the definitions of lock routines here are
1797 bypassed, in which case you will need to define the type MLOCK_T,
1798 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1799 and TRY_LOCK. You must also declare a
1800 static MLOCK_T malloc_global_mutex = { initialization values };.
1801
1802*/
1803
1804#if !USE_LOCKS
1805#define USE_LOCK_BIT (0U)
1806#define INITIAL_LOCK(l) (0)
1807#define DESTROY_LOCK(l) (0)
1808#define ACQUIRE_MALLOC_GLOBAL_LOCK()
1809#define RELEASE_MALLOC_GLOBAL_LOCK()
1810
1811#else
1812#if USE_LOCKS > 1
1813/* ----------------------- User-defined locks ------------------------ */
1814/* Define your own lock implementation here */
1815/* #define INITIAL_LOCK(lk) ... */
1816/* #define DESTROY_LOCK(lk) ... */
1817/* #define ACQUIRE_LOCK(lk) ... */
1818/* #define RELEASE_LOCK(lk) ... */
1819/* #define TRY_LOCK(lk) ... */
1820/* static MLOCK_T malloc_global_mutex = ... */
1821
1822#elif USE_SPIN_LOCKS
1823
1824/* First, define CAS_LOCK and CLEAR_LOCK on ints */
1825/* Note CAS_LOCK defined to return 0 on success */
1826
1827#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1828#define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1829#define CLEAR_LOCK(sl) __sync_lock_release(sl)
1830
1831#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1832/* Custom spin locks for older gcc on x86 */
1833static FORCEINLINE int x86_cas_lock(int *sl) {
1834 int ret;
1835 int val = 1;
1836 int cmp = 0;
1837 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1838 : "=a" (ret)
1839 : "r" (val), "m" (*(sl)), "0"(cmp)
1840 : "memory", "cc");
1841 return ret;
1842}
1843
1844static FORCEINLINE void x86_clear_lock(int* sl) {
1845 assert(*sl != 0);
1846 int prev = 0;
1847 int ret;
1848 __asm__ __volatile__ ("lock; xchgl %0, %1"
1849 : "=r" (ret)
1850 : "m" (*(sl)), "0"(prev)
1851 : "memory");
1852}
1853
1854#define CAS_LOCK(sl) x86_cas_lock(sl)
1855#define CLEAR_LOCK(sl) x86_clear_lock(sl)
1856
1857#else /* Win32 MSC */
1858#define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
1859#define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1860
1861#endif /* ... gcc spin locks ... */
1862
1863/* How to yield for a spin lock */
1864#define SPINS_PER_YIELD 63
1865#if defined(_MSC_VER)
1866#define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1867#define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1868#elif defined (__SVR4) && defined (__sun) /* solaris */
1869#define SPIN_LOCK_YIELD thr_yield();
1870#elif !defined(LACKS_SCHED_H)
1871#define SPIN_LOCK_YIELD sched_yield();
1872#else
1873#define SPIN_LOCK_YIELD
1874#endif /* ... yield ... */
1875
1876#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1877/* Plain spin locks use single word (embedded in malloc_states) */
1878static int spin_acquire_lock(int *sl) {
1879 int spins = 0;
1880 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1881 if ((++spins & SPINS_PER_YIELD) == 0) {
1882 SPIN_LOCK_YIELD;
1883 }
1884 }
1885 return 0;
1886}
1887
1888#define MLOCK_T int
1889#define TRY_LOCK(sl) !CAS_LOCK(sl)
1890#define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1891#define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1892#define INITIAL_LOCK(sl) (*sl = 0)
1893#define DESTROY_LOCK(sl) (0)
1894static MLOCK_T malloc_global_mutex = 0;
1895
1896#else /* USE_RECURSIVE_LOCKS */
1897/* types for lock owners */
1898#ifdef WIN32
1899#define THREAD_ID_T DWORD
1900#define CURRENT_THREAD GetCurrentThreadId()
1901#define EQ_OWNER(X,Y) ((X) == (Y))
1902#else
1903/*
1904 Note: the following assume that pthread_t is a type that can be
1905  initialized to (cast) zero. If this is not the case, you will need to
1906 somehow redefine these or not use spin locks.
1907*/
1908#define THREAD_ID_T pthread_t
1909#define CURRENT_THREAD pthread_self()
1910#define EQ_OWNER(X,Y) pthread_equal(X, Y)
1911#endif
1912
1913struct malloc_recursive_lock {
1914 int sl;
1915 unsigned int c;
1916 THREAD_ID_T threadid;
1917};
1918
1919#define MLOCK_T struct malloc_recursive_lock
1920static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1921
1922static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1923 assert(lk->sl != 0);
1924 if (--lk->c == 0) {
1925 CLEAR_LOCK(&lk->sl);
1926 }
1927}
1928
1929static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1930 THREAD_ID_T mythreadid = CURRENT_THREAD;
1931 int spins = 0;
1932 for (;;) {
1933 if (*((volatile int *)(&lk->sl)) == 0) {
1934 if (!CAS_LOCK(&lk->sl)) {
1935 lk->threadid = mythreadid;
1936 lk->c = 1;
1937 return 0;
1938 }
1939 }
1940 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1941 ++lk->c;
1942 return 0;
1943 }
1944 if ((++spins & SPINS_PER_YIELD) == 0) {
1945 SPIN_LOCK_YIELD;
1946 }
1947 }
1948}
1949
1950static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1951 THREAD_ID_T mythreadid = CURRENT_THREAD;
1952 if (*((volatile int *)(&lk->sl)) == 0) {
1953 if (!CAS_LOCK(&lk->sl)) {
1954 lk->threadid = mythreadid;
1955 lk->c = 1;
1956 return 1;
1957 }
1958 }
1959 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1960 ++lk->c;
1961 return 1;
1962 }
1963 return 0;
1964}
1965
1966#define RELEASE_LOCK(lk) recursive_release_lock(lk)
1967#define TRY_LOCK(lk) recursive_try_lock(lk)
1968#define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1969#define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1970#define DESTROY_LOCK(lk) (0)
1971#endif /* USE_RECURSIVE_LOCKS */
1972
1973#elif defined(WIN32) /* Win32 critical sections */
1974#define MLOCK_T CRITICAL_SECTION
1975#define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1976#define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1977#define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1978#define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1979#define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1980#define NEED_GLOBAL_LOCK_INIT
1981
1982static MLOCK_T malloc_global_mutex;
1983static volatile LONG malloc_global_mutex_status;
1984
1985/* Use spin loop to initialize global lock */
1986static void init_malloc_global_mutex() {
1987 for (;;) {
1988 long stat = malloc_global_mutex_status;
1989 if (stat > 0)
1990 return;
1991    /* transition to < 0 while initializing, then to > 0 */
1992 if (stat == 0 &&
1993        interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
1994      InitializeCriticalSection(&malloc_global_mutex);
1995      interlockedexchange(&malloc_global_mutex_status, (LONG)1);
1996      return;
1997 }
1998 SleepEx(0, FALSE);
1999 }
2000}
2001
2002#else /* pthreads-based locks */
2003#define MLOCK_T pthread_mutex_t
2004#define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2005#define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2006#define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2007#define INITIAL_LOCK(lk) pthread_init_lock(lk)
2008#define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2009
2010#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2011/* Cope with old-style linux recursive lock initialization by adding */
2012/* skipped internal declaration from pthread.h */
2013extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2014                                              int __kind));
2015#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2016#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2017#endif /* USE_RECURSIVE_LOCKS ... */
2018
2019static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2020
2021static int pthread_init_lock (MLOCK_T *lk) {
2022 pthread_mutexattr_t attr;
2023 if (pthread_mutexattr_init(&attr)) return 1;
2024#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2025 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2026#endif
2027 if (pthread_mutex_init(lk, &attr)) return 1;
2028 if (pthread_mutexattr_destroy(&attr)) return 1;
2029 return 0;
2030}
2031
2032#endif /* ... lock types ... */
2033
2034/* Common code for all lock types */
2035#define USE_LOCK_BIT (2U)
2036
2037#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2038#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2039#endif
2040
2041#ifndef RELEASE_MALLOC_GLOBAL_LOCK
2042#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2043#endif
2044
2045#endif /* USE_LOCKS */
2046
2047/* ----------------------- Chunk representations ------------------------ */
2048
2049/*
2050 (The following includes lightly edited explanations by Colin Plumb.)
2051
2052 The malloc_chunk declaration below is misleading (but accurate and
2053 necessary). It declares a "view" into memory allowing access to
2054 necessary fields at known offsets from a given base.
2055
2056 Chunks of memory are maintained using a `boundary tag' method as
2057 originally described by Knuth. (See the paper by Paul Wilson
2058 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2059 techniques.) Sizes of free chunks are stored both in the front of
2060 each chunk and at the end. This makes consolidating fragmented
2061 chunks into bigger chunks fast. The head fields also hold bits
2062 representing whether chunks are free or in use.
2063
2064 Here are some pictures to make it clearer. They are "exploded" to
2065 show that the state of a chunk can be thought of as extending from
2066 the high 31 bits of the head field of its header through the
2067 prev_foot and PINUSE_BIT bit of the following chunk header.
2068
2069 A chunk that's in use looks like:
2070
2071 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2072 | Size of previous chunk (if P = 0) |
2073 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2074 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2075 | Size of this chunk 1| +-+
2076 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2077 | |
2078 +- -+
2079 | |
2080 +- -+
2081 | :
2082 +- size - sizeof(size_t) available payload bytes -+
2083 : |
2084 chunk-> +- -+
2085 | |
2086 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2087 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2088 | Size of next chunk (may or may not be in use) | +-+
2089 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2090
2091 And if it's free, it looks like this:
2092
2093 chunk-> +- -+
2094 | User payload (must be in use, or we would have merged!) |
2095 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2097 | Size of this chunk 0| +-+
2098 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2099 | Next pointer |
2100 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101 | Prev pointer |
2102 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103 | :
2104 +- size - sizeof(struct chunk) unused bytes -+
2105 : |
2106 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2107 | Size of this chunk |
2108 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2109 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2110 | Size of next chunk (must be in use, or we would have merged)| +-+
2111 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112 | :
2113 +- User payload -+
2114 : |
2115 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116 |0|
2117 +-+
2118 Note that since we always merge adjacent free chunks, the chunks
2119 adjacent to a free chunk must be in use.
2120
2121 Given a pointer to a chunk (which can be derived trivially from the
2122 payload pointer) we can, in O(1) time, find out whether the adjacent
2123 chunks are free, and if so, unlink them from the lists that they
2124 are on and merge them with the current chunk.
2125
2126 Chunks always begin on even word boundaries, so the mem portion
2127 (which is returned to the user) is also on an even word boundary, and
2128 thus at least double-word aligned.
2129
2130 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2131 chunk size (which is always a multiple of two words), is an in-use
2132 bit for the *previous* chunk. If that bit is *clear*, then the
2133 word before the current chunk size contains the previous chunk
2134 size, and can be used to find the front of the previous chunk.
2135 The very first chunk allocated always has this bit set, preventing
2136 access to non-existent (or non-owned) memory. If pinuse is set for
2137 any given chunk, then you CANNOT determine the size of the
2138 previous chunk, and might even get a memory addressing fault when
2139 trying to do so.
2140
2141 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2142 the chunk size redundantly records whether the current chunk is
2143 inuse (unless the chunk is mmapped). This redundancy enables usage
2144 checks within free and realloc, and reduces indirection when freeing
2145 and consolidating chunks.
2146
2147 Each freshly allocated chunk must have both cinuse and pinuse set.
2148 That is, each allocated chunk borders either a previously allocated
2149 and still in-use chunk, or the base of its memory arena. This is
2150 ensured by making all allocations from the `lowest' part of any
2151 found chunk. Further, no free chunk physically borders another one,
2152 so each free chunk is known to be preceded and followed by either
2153 inuse chunks or the ends of memory.
2154
2155 Note that the `foot' of the current chunk is actually represented
2156 as the prev_foot of the NEXT chunk. This makes it easier to
2157 deal with alignments etc but can be very confusing when trying
2158 to extend or adapt this code.
2159
2160 The exceptions to all this are
2161
2162 1. The special chunk `top' is the top-most available chunk (i.e.,
2163 the one bordering the end of available memory). It is treated
2164 specially. Top is never included in any bin, is used only if
2165 no other chunk is available, and is released back to the
2166 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2167 the top chunk is treated as larger (and thus less well
2168 fitting) than any other available chunk. The top chunk
2169 doesn't update its trailing size field since there is no next
2170 contiguous chunk that would have to index off it. However,
2171 space is still allocated for it (TOP_FOOT_SIZE) to enable
2172 separation or merging when space is extended.
2173
2174     2. Chunks allocated via mmap have both cinuse and pinuse bits
2175 cleared in their head fields. Because they are allocated
2176 one-by-one, each must carry its own prev_foot field, which is
2177 also used to hold the offset this chunk has within its mmapped
2178 region, which is needed to preserve alignment. Each mmapped
2179 chunk is trailed by the first two fields of a fake next-chunk
2180 for sake of usage checks.
2181
2182*/
2183
2184struct malloc_chunk {
2185 size_t prev_foot; /* Size of previous chunk (if free). */
2186 size_t head; /* Size and inuse bits. */
2187 struct malloc_chunk* fd; /* double links -- used only if free. */
2188 struct malloc_chunk* bk;
2189};
2190
2191typedef struct malloc_chunk mchunk;
2192typedef struct malloc_chunk* mchunkptr;
2193typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2194typedef unsigned int bindex_t; /* Described below */
2195typedef unsigned int binmap_t; /* Described below */
2196typedef unsigned int flag_t; /* The type of various bit flag sets */
2197
2198/* ------------------- Chunks sizes and alignments ----------------------- */
2199
2200#define MCHUNK_SIZE (sizeof(mchunk))
2201
2202#if FOOTERS
2203#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2204#else /* FOOTERS */
2205#define CHUNK_OVERHEAD (SIZE_T_SIZE)
2206#endif /* FOOTERS */
2207
2208/* MMapped chunks need a second word of overhead ... */
2209#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2210/* ... and additional padding for fake next-chunk at foot */
2211#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2212
2213/* The smallest size we can malloc is an aligned minimal chunk */
2214#define MIN_CHUNK_SIZE\
2215 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2216
2217/* conversion from malloc headers to user pointers, and back */
2218#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2219#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2220/* chunk associated with aligned address A */
2221#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2222
2223/* Bounds on request (not chunk) sizes. */
2224#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2225#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2226
2227/* pad request bytes into a usable size */
2228#define pad_request(req) \
2229 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2230
2231/* pad request, checking for minimum (but not maximum) */
2232#define request2size(req) \
2233 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
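
/*
  Worked example (assuming 4-byte size_t, MALLOC_ALIGNMENT == 8, and
  no FOOTERS, so CHUNK_OVERHEAD == 4): pad_request(20) ==
  (20 + 4 + 7) & ~7 == 24, while request2size(1) == MIN_CHUNK_SIZE == 16.
*/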
2234
2235
2236/* ------------------ Operations on head and foot fields ----------------- */
2237
2238/*
2239 The head field of a chunk is or'ed with PINUSE_BIT when previous
2240 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2241 use, unless mmapped, in which case both bits are cleared.
2242
2243 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2244*/
2245
2246#define PINUSE_BIT (SIZE_T_ONE)
2247#define CINUSE_BIT (SIZE_T_TWO)
2248#define FLAG4_BIT (SIZE_T_FOUR)
2249#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2250#define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
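
/*
  Worked example: a head word of (48 | PINUSE_BIT | CINUSE_BIT) == 51
  denotes an in-use 48-byte chunk whose previous neighbor is also in
  use; chunksize() below masks off FLAG_BITS to recover 48.
*/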
2251
2252/* Head value for fenceposts */
2253#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2254
2255/* extraction of fields from head words */
2256#define cinuse(p) ((p)->head & CINUSE_BIT)
2257#define pinuse(p) ((p)->head & PINUSE_BIT)
2258#define flag4inuse(p) ((p)->head & FLAG4_BIT)
2259#define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2260#define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2261
2262#define chunksize(p) ((p)->head & ~(FLAG_BITS))
2263
2264#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2265#define set_flag4(p) ((p)->head |= FLAG4_BIT)
2266#define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
2267
2268/* Treat space at ptr +/- offset as a chunk */
2269#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2270#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2271
2272/* Ptr to next or previous physical malloc_chunk. */
2273#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2274#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2275
2276/* extract next chunk's pinuse bit */
2277#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2278
2279/* Get/set size at footer */
2280#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2281#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2282
2283/* Set size, pinuse bit, and foot */
2284#define set_size_and_pinuse_of_free_chunk(p, s)\
2285 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2286
2287/* Set size, pinuse bit, foot, and clear next pinuse */
2288#define set_free_with_pinuse(p, s, n)\
2289 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2290
2291/* Get the internal overhead associated with chunk p */
2292#define overhead_for(p)\
2293 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2294
2295/* Return true if malloced space is not necessarily cleared */
2296#if MMAP_CLEARS
2297#define calloc_must_clear(p) (!is_mmapped(p))
2298#else /* MMAP_CLEARS */
2299#define calloc_must_clear(p) (1)
2300#endif /* MMAP_CLEARS */
2301
2302/* ---------------------- Overlaid data structures ----------------------- */
2303
2304/*
2305 When chunks are not in use, they are treated as nodes of either
2306 lists or trees.
2307
2308 "Small" chunks are stored in circular doubly-linked lists, and look
2309 like this:
2310
2311 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2312 | Size of previous chunk |
2313 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2314 `head:' | Size of chunk, in bytes |P|
2315 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2316 | Forward pointer to next chunk in list |
2317 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2318 | Back pointer to previous chunk in list |
2319 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2320 | Unused space (may be 0 bytes long) .
2321 . .
2322 . |
2323nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2324 `foot:' | Size of chunk, in bytes |
2325 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2326
2327 Larger chunks are kept in a form of bitwise digital trees (aka
2328 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2329 free chunks greater than 256 bytes, their size doesn't impose any
2330 constraints on user chunk sizes. Each node looks like:
2331
2332 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333 | Size of previous chunk |
2334 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335 `head:' | Size of chunk, in bytes |P|
2336 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2337 | Forward pointer to next chunk of same size |
2338 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2339 | Back pointer to previous chunk of same size |
2340 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341 | Pointer to left child (child[0]) |
2342 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343 | Pointer to right child (child[1]) |
2344 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2345 | Pointer to parent |
2346 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2347 | bin index of this chunk |
2348 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2349 | Unused space .
2350 . |
2351nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352 `foot:' | Size of chunk, in bytes |
2353 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354
2355 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2356 of the same size are arranged in a circularly-linked list, with only
2357 the oldest chunk (the next to be used, in our FIFO ordering)
2358 actually in the tree. (Tree members are distinguished by a non-null
2359  parent pointer.)  If a chunk with the same size as an existing node
2360 is inserted, it is linked off the existing node using pointers that
2361 work in the same way as fd/bk pointers of small chunks.
2362
2363 Each tree contains a power of 2 sized range of chunk sizes (the
2364  smallest is 0x100 <= x < 0x180), which is divided in half at each
2365  tree level, with the chunks in the smaller half of the range (0x100
2366  <= x < 0x140 for the top node) in the left subtree and the larger
2367 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2368 done by inspecting individual bits.
2369
2370 Using these rules, each node's left subtree contains all smaller
2371 sizes than its right subtree. However, the node at the root of each
2372 subtree has no particular ordering relationship to either. (The
2373 dividing line between the subtree sizes is based on trie relation.)
2374 If we remove the last chunk of a given size from the interior of the
2375 tree, we need to replace it with a leaf node. The tree ordering
2376 rules permit a node to be replaced by any leaf below it.
2377
2378 The smallest chunk in a tree (a common operation in a best-fit
2379 allocator) can be found by walking a path to the leftmost leaf in
2380 the tree. Unlike a usual binary tree, where we follow left child
2381 pointers until we reach a null, here we follow the right child
2382 pointer any time the left one is null, until we reach a leaf with
2383 both child pointers null. The smallest chunk in the tree will be
2384 somewhere along that path.
2385
2386 The worst case number of steps to add, find, or remove a node is
2387 bounded by the number of bits differentiating chunks within
2388 bins. Under current bin calculations, this ranges from 6 up to 21
2389 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2390 is of course much better.
2391*/
2392
2393struct malloc_tree_chunk {
2394 /* The first four fields must be compatible with malloc_chunk */
2395 size_t prev_foot;
2396 size_t head;
2397 struct malloc_tree_chunk* fd;
2398 struct malloc_tree_chunk* bk;
2399
2400 struct malloc_tree_chunk* child[2];
2401 struct malloc_tree_chunk* parent;
2402 bindex_t index;
2403};
2404
2405typedef struct malloc_tree_chunk tchunk;
2406typedef struct malloc_tree_chunk* tchunkptr;
2407typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2408
2409/* A little helper macro for trees */
2410#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2411
2412/* ----------------------------- Segments -------------------------------- */
2413
2414/*
2415 Each malloc space may include non-contiguous segments, held in a
2416 list headed by an embedded malloc_segment record representing the
2417 top-most space. Segments also include flags holding properties of
2418 the space. Large chunks that are directly allocated by mmap are not
2419 included in this list. They are instead independently created and
2420 destroyed without otherwise keeping track of them.
2421
2422 Segment management mainly comes into play for spaces allocated by
2423 MMAP. Any call to MMAP might or might not return memory that is
2424 adjacent to an existing segment. MORECORE normally contiguously
2425 extends the current space, so this space is almost always adjacent,
2426 which is simpler and faster to deal with. (This is why MORECORE is
2427 used preferentially to MMAP when both are available -- see
2428 sys_alloc.) When allocating using MMAP, we don't use any of the
2429 hinting mechanisms (inconsistently) supported in various
2430 implementations of unix mmap, or distinguish reserving from
2431 committing memory. Instead, we just ask for space, and exploit
2432 contiguity when we get it. It is probably possible to do
2433 better than this on some systems, but no general scheme seems
2434 to be significantly better.
2435
2436 Management entails a simpler variant of the consolidation scheme
2437 used for chunks to reduce fragmentation -- new adjacent memory is
2438 normally prepended or appended to an existing segment. However,
2439 there are limitations compared to chunk consolidation that mostly
2440 reflect the fact that segment processing is relatively infrequent
2441 (occurring only when getting memory from system) and that we
2442 don't expect to have huge numbers of segments:
2443
2444 * Segments are not indexed, so traversal requires linear scans. (It
2445 would be possible to index these, but is not worth the extra
2446 overhead and complexity for most programs on most platforms.)
2447 * New segments are only appended to old ones when holding top-most
2448 memory; if they cannot be prepended to others, they are held in
2449 different segments.
2450
2451 Except for the top-most segment of an mstate, each segment record
2452 is kept at the tail of its segment. Segments are added by pushing
2453 segment records onto the list headed by &mstate.seg for the
2454 containing mstate.
2455
2456 Segment flags control allocation/merge/deallocation policies:
2457 * If EXTERN_BIT set, then we did not allocate this segment,
2458 and so should not try to deallocate or merge with others.
2459 (This currently holds only for the initial segment passed
2460 into create_mspace_with_base.)
2461 * If USE_MMAP_BIT set, the segment may be merged with
2462 other surrounding mmapped segments and trimmed/de-allocated
2463 using munmap.
2464 * If neither bit is set, then the segment was obtained using
2465 MORECORE so can be merged with surrounding MORECORE'd segments
2466 and deallocated/trimmed using MORECORE with negative arguments.
2467*/
2468
2469struct malloc_segment {
2470 char* base; /* base address */
2471 size_t size; /* allocated size */
2472 struct malloc_segment* next; /* ptr to next segment */
2473 flag_t sflags; /* mmap and extern flag */
2474};
2475
2476#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2477#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2478
2479typedef struct malloc_segment msegment;
2480typedef struct malloc_segment* msegmentptr;
2481
2482/* ---------------------------- malloc_state ----------------------------- */
2483
2484/*
2485 A malloc_state holds all of the bookkeeping for a space.
2486 The main fields are:
2487
2488 Top
2489 The topmost chunk of the currently active segment. Its size is
2490 cached in topsize. The actual size of topmost space is
2491 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2492 fenceposts and segment records if necessary when getting more
2493 space from the system. The size at which to autotrim top is
2494 cached from mparams in trim_check, except that it is disabled if
2495 an autotrim fails.
2496
2497 Designated victim (dv)
2498 This is the preferred chunk for servicing small requests that
2499 don't have exact fits. It is normally the chunk split off most
2500 recently to service another small request. Its size is cached in
2501 dvsize. The link fields of this chunk are not maintained since it
2502 is not kept in a bin.
2503
2504 SmallBins
2505 An array of bin headers for free chunks. These bins hold chunks
2506 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2507 chunks of all the same size, spaced 8 bytes apart. To simplify
2508 use in double-linked lists, each bin header acts as a malloc_chunk
2509 pointing to the real first node, if it exists (else pointing to
2510 itself). This avoids special-casing for headers. But to avoid
2511 waste, we allocate only the fd/bk pointers of bins, and then use
2512 repositioning tricks to treat these as the fields of a chunk.
2513
2514 TreeBins
2515 Treebins are pointers to the roots of trees holding a range of
2516 sizes. There are 2 equally spaced treebins for each power of two
2517    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2518 larger.
2519
2520 Bin maps
2521 There is one bit map for small bins ("smallmap") and one for
2522    treebins ("treemap"). Each bin sets its bit when non-empty, and
2523 clears the bit when empty. Bit operations are then used to avoid
2524 bin-by-bin searching -- nearly all "search" is done without ever
2525 looking at bins that won't be selected. The bit maps
2526    conservatively use 32 bits per map word, even on a 64-bit system.
2527 For a good description of some of the bit-based techniques used
2528 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2529 supplement at http://hackersdelight.org/). Many of these are
2530 intended to reduce the branchiness of paths through malloc etc, as
2531 well as to reduce the number of memory locations read or written.
2532
2533 Segments
2534 A list of segments headed by an embedded malloc_segment record
2535 representing the initial space.
2536
2537 Address check support
2538 The least_addr field is the least address ever obtained from
2539 MORECORE or MMAP. Attempted frees and reallocs of any address less
2540 than this are trapped (unless INSECURE is defined).
2541
2542 Magic tag
2543 A cross-check field that should always hold same value as mparams.magic.
2544
2545 Max allowed footprint
2546 The maximum allowed bytes to allocate from system (zero means no limit)
2547
2548 Flags
2549 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2550
2551 Statistics
2552 Each space keeps track of current and maximum system memory
2553 obtained via MORECORE or MMAP.
2554
2555 Trim support
2556 Fields holding the amount of unused topmost memory that should trigger
2557 trimming, and a counter to force periodic scanning to release unused
2558 non-topmost segments.
2559
2560 Locking
2561 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2562 around every public call using this mspace.
2563
2564 Extension support
2565 A void* pointer and a size_t field that can be used to help implement
2566 extensions to this malloc.
2567*/
2568
2569/* Bin types, widths and sizes */
2570#define NSMALLBINS (32U)
2571#define NTREEBINS (32U)
2572#define SMALLBIN_SHIFT (3U)
2573#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2574#define TREEBIN_SHIFT (8U)
2575#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2576#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2577#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
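
/*
  Worked example (assuming 4-byte size_t and no FOOTERS): smallbin i
  holds chunks of size i << SMALLBIN_SHIFT, i.e. multiples of 8 below
  MIN_LARGE_SIZE == 256; MAX_SMALL_REQUEST == 255 - 7 - 4 == 244, so
  user requests above 244 bytes are serviced from the treebins.
*/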
2578
2579struct malloc_state {
2580 binmap_t smallmap;
2581 binmap_t treemap;
2582 size_t dvsize;
2583 size_t topsize;
2584 char* least_addr;
2585 mchunkptr dv;
2586 mchunkptr top;
2587 size_t trim_check;
2588 size_t release_checks;
2589 size_t magic;
2590 mchunkptr smallbins[(NSMALLBINS+1)*2];
2591 tbinptr treebins[NTREEBINS];
2592 size_t footprint;
2593 size_t max_footprint;
2594 size_t footprint_limit; /* zero means no limit */
2595 flag_t mflags;
2596#if USE_LOCKS
2597 MLOCK_T mutex; /* locate lock among fields that rarely change */
2598#endif /* USE_LOCKS */
2599 msegment seg;
2600 void* extp; /* Unused but available for extensions */
2601 size_t exts;
2602};
2603
2604typedef struct malloc_state* mstate;
2605
2606/* ------------- Global malloc_state and malloc_params ------------------- */
2607
2608/*
2609 malloc_params holds global properties, including those that can be
2610 dynamically set using mallopt. There is a single instance, mparams,
2611 initialized in init_mparams. Note that the non-zeroness of "magic"
2612 also serves as an initialization flag.
2613*/
2614
2615struct malloc_params {
2616 size_t magic;
2617 size_t page_size;
2618 size_t granularity;
2619 size_t mmap_threshold;
2620 size_t trim_threshold;
2621 flag_t default_mflags;
2622};
2623
2624static struct malloc_params mparams;
2625
2626/* Ensure mparams initialized */
2627#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2628
2629#if !ONLY_MSPACES
2630
2631/* The global malloc_state used for all non-"mspace" calls */
2632static struct malloc_state _gm_;
2633#define gm (&_gm_)
2634#define is_global(M) ((M) == &_gm_)
2635
2636#endif /* !ONLY_MSPACES */
2637
2638#define is_initialized(M) ((M)->top != 0)
2639
2640/* -------------------------- system alloc setup ------------------------- */
2641
2642/* Operations on mflags */
2643
2644#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2645#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2646#if USE_LOCKS
2647#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2648#else
2649#define disable_lock(M)
2650#endif
2651
2652#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2653#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2654#if HAVE_MMAP
2655#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2656#else
2657#define disable_mmap(M)
2658#endif
2659
2660#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2661#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2662
2663#define set_lock(M,L)\
2664 ((M)->mflags = (L)?\
2665 ((M)->mflags | USE_LOCK_BIT) :\
2666 ((M)->mflags & ~USE_LOCK_BIT))
2667
2668/* page-align a size */
2669#define page_align(S)\
2670 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2671
2672/* granularity-align a size */
2673#define granularity_align(S)\
2674 (((S) + (mparams.granularity - SIZE_T_ONE))\
2675 & ~(mparams.granularity - SIZE_T_ONE))
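
/*
  Worked example (assuming mparams.page_size == 4096): page_align(1)
  == 4096 and page_align(4096) == 4096; granularity_align behaves the
  same way with respect to mparams.granularity.
*/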
2676
2677
2678/* For mmap, use granularity alignment on windows, else page-align */
2679#ifdef WIN32
2680#define mmap_align(S) granularity_align(S)
2681#else
2682#define mmap_align(S) page_align(S)
2683#endif
2684
2685/* For sys_alloc, enough padding to ensure can malloc request on success */
2686#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2687
2688#define is_page_aligned(S)\
2689 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2690#define is_granularity_aligned(S)\
2691 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2692
2693/* True if segment S holds address A */
2694#define segment_holds(S, A)\
2695 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2696
2697/* Return segment holding given address */
2698static msegmentptr segment_holding(mstate m, char* addr) {
2699 msegmentptr sp = &m->seg;
2700 for (;;) {
2701 if (addr >= sp->base && addr < sp->base + sp->size)
2702 return sp;
2703 if ((sp = sp->next) == 0)
2704 return 0;
2705 }
2706}
2707
2708/* Return true if segment contains a segment link */
2709static int has_segment_link(mstate m, msegmentptr ss) {
2710 msegmentptr sp = &m->seg;
2711 for (;;) {
2712 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2713 return 1;
2714 if ((sp = sp->next) == 0)
2715 return 0;
2716 }
2717}
2718
2719#ifndef MORECORE_CANNOT_TRIM
2720#define should_trim(M,s) ((s) > (M)->trim_check)
2721#else /* MORECORE_CANNOT_TRIM */
2722#define should_trim(M,s) (0)
2723#endif /* MORECORE_CANNOT_TRIM */
2724
2725/*
2726 TOP_FOOT_SIZE is padding at the end of a segment, including space
2727 that may be needed to place segment records and fenceposts when new
2728 noncontiguous segments are added.
2729*/
2730#define TOP_FOOT_SIZE\
2731 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2732
2733
2734/* ------------------------------- Hooks -------------------------------- */
2735
2736/*
2737 PREACTION should be defined to return 0 on success, and nonzero on
2738 failure. If you are not using locking, you can redefine these to do
2739 anything you like.
2740*/
2741
2742#if USE_LOCKS
2743#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2744#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2745#else /* USE_LOCKS */
2746
2747#ifndef PREACTION
2748#define PREACTION(M) (0)
2749#endif /* PREACTION */
2750
2751#ifndef POSTACTION
2752#define POSTACTION(M)
2753#endif /* POSTACTION */
2754
2755#endif /* USE_LOCKS */
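
/*
  Typical bracketing pattern, as used by internal_mallinfo and the
  public routines later in this file:

    if (!PREACTION(m)) {
      ... operate on malloc_state m ...
      POSTACTION(m);
    }

  A nonzero (failed) PREACTION skips the critical section entirely.
*/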
2756
2757/*
2758 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2759 USAGE_ERROR_ACTION is triggered on detected bad frees and
2760 reallocs. The argument p is an address that might have triggered the
2761 fault. It is ignored by the two predefined actions, but might be
2762 useful in custom actions that try to help diagnose errors.
2763*/
2764
2765#if PROCEED_ON_ERROR
2766
2767/* A count of the number of corruption errors causing resets */
2768int malloc_corruption_error_count;
2769
2770/* default corruption action */
2771static void reset_on_error(mstate m);
2772
2773#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2774#define USAGE_ERROR_ACTION(m, p)
2775
2776#else /* PROCEED_ON_ERROR */
2777
2778#ifndef CORRUPTION_ERROR_ACTION
2779#define CORRUPTION_ERROR_ACTION(m) ABORT
2780#endif /* CORRUPTION_ERROR_ACTION */
2781
2782#ifndef USAGE_ERROR_ACTION
2783#define USAGE_ERROR_ACTION(m,p) ABORT
2784#endif /* USAGE_ERROR_ACTION */
2785
2786#endif /* PROCEED_ON_ERROR */
2787
2788
2789/* -------------------------- Debugging setup ---------------------------- */
2790
2791#if ! DEBUG
2792
2793#define check_free_chunk(M,P)
2794#define check_inuse_chunk(M,P)
2795#define check_malloced_chunk(M,P,N)
2796#define check_mmapped_chunk(M,P)
2797#define check_malloc_state(M)
2798#define check_top_chunk(M,P)
2799
2800#else /* DEBUG */
2801#define check_free_chunk(M,P) do_check_free_chunk(M,P)
2802#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2803#define check_top_chunk(M,P) do_check_top_chunk(M,P)
2804#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2805#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2806#define check_malloc_state(M) do_check_malloc_state(M)
2807
2808static void do_check_any_chunk(mstate m, mchunkptr p);
2809static void do_check_top_chunk(mstate m, mchunkptr p);
2810static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2811static void do_check_inuse_chunk(mstate m, mchunkptr p);
2812static void do_check_free_chunk(mstate m, mchunkptr p);
2813static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2814static void do_check_tree(mstate m, tchunkptr t);
2815static void do_check_treebin(mstate m, bindex_t i);
2816static void do_check_smallbin(mstate m, bindex_t i);
2817static void do_check_malloc_state(mstate m);
2818static int bin_find(mstate m, mchunkptr x);
2819static size_t traverse_and_check(mstate m);
2820#endif /* DEBUG */
2821
2822/* ---------------------------- Indexing Bins ---------------------------- */
2823
2824#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2825#define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2826#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2827#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
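
/*
  Worked example, assuming the default SMALLBIN_SHIFT of 3: a chunk
  of size 40 has small_index(40) == 5 and small_index2size(5) == 40,
  so each smallbin holds exactly one chunk size, in 8-byte steps.
*/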
2828
2829/* addressing by index. See above about smallbin repositioning */
2830/* BEGIN android-changed: strict aliasing change: char* cast to void* */
2831#define smallbin_at(M, i) ((sbinptr)((void*)&((M)->smallbins[(i)<<1])))
2832/* END android-changed */
2833#define treebin_at(M,i) (&((M)->treebins[i]))
2834
2835/* assign tree index for size S to variable I. Use x86 asm if possible */
2836#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2837#define compute_tree_index(S, I)\
2838{\
2839 unsigned int X = S >> TREEBIN_SHIFT;\
2840 if (X == 0)\
2841 I = 0;\
2842 else if (X > 0xFFFF)\
2843 I = NTREEBINS-1;\
2844 else {\
2845 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2846 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2847 }\
2848}
2849
2850#elif defined (__INTEL_COMPILER)
2851#define compute_tree_index(S, I)\
2852{\
2853 size_t X = S >> TREEBIN_SHIFT;\
2854 if (X == 0)\
2855 I = 0;\
2856 else if (X > 0xFFFF)\
2857 I = NTREEBINS-1;\
2858 else {\
2859 unsigned int K = _bit_scan_reverse (X); \
2860 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2861 }\
2862}
2863
2864#elif defined(_MSC_VER) && _MSC_VER>=1300
2865#define compute_tree_index(S, I)\
2866{\
2867 size_t X = S >> TREEBIN_SHIFT;\
2868 if (X == 0)\
2869 I = 0;\
2870 else if (X > 0xFFFF)\
2871 I = NTREEBINS-1;\
2872 else {\
2873 unsigned int K;\
2874 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2875 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2876 }\
2877}
2878
2879#else /* GNUC */
2880#define compute_tree_index(S, I)\
2881{\
2882 size_t X = S >> TREEBIN_SHIFT;\
2883 if (X == 0)\
2884 I = 0;\
2885 else if (X > 0xFFFF)\
2886 I = NTREEBINS-1;\
2887 else {\
2888 unsigned int Y = (unsigned int)X;\
2889 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2890 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2891 N += K;\
2892 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2893 K = 14 - N + ((Y <<= K) >> 15);\
2894 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2895 }\
2896}
2897#endif /* GNUC */
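
/*
  Worked example of the portable version, assuming the default
  TREEBIN_SHIFT of 8: for S == 1536, X == 6, whose highest set bit
  is K == 2, so I == (K << 1) + ((1536 >> (K + 7)) & 1) == 5. Each
  pair of treebins thus splits a power-of-two size range into a
  lower and an upper half.
*/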
2898
2899/* Bit representing maximum resolved size in a treebin at i */
2900#define bit_for_tree_index(i) \
2901 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2902
2903/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2904#define leftshift_for_tree_index(i) \
2905 ((i == NTREEBINS-1)? 0 : \
2906 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2907
2908/* The size of the smallest chunk held in bin with index i */
2909#define minsize_for_tree_index(i) \
2910 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2911 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2912
2913
2914/* ------------------------ Operations on bin maps ----------------------- */
2915
2916/* bit corresponding to given index */
2917#define idx2bit(i) ((binmap_t)(1) << (i))
2918
2919/* Mark/Clear bits with given index */
2920#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2921#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2922#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2923
2924#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2925#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2926#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2927
2928/* isolate the least set bit of a bitmap */
2929#define least_bit(x) ((x) & -(x))
2930
2931/* mask with all bits to left of least bit of x on */
2932#define left_bits(x) ((x<<1) | -(x<<1))
2933
2934/* mask with all bits to left of or equal to least bit of x on */
2935#define same_or_left_bits(x) ((x) | -(x))
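
/*
  Worked example on a 32-bit binmap_t: least_bit(0x28) == 0x08,
  left_bits(0x08) == 0xFFFFFFF0, and same_or_left_bits(0x08) ==
  0xFFFFFFF8. tmalloc_large below uses left_bits to find the next
  nonempty bin at least as large as a request.
*/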
2936
2937/* index corresponding to given bit. Use x86 asm if possible */
2938
2939#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2940#define compute_bit2idx(X, I)\
2941{\
2942 unsigned int J;\
2943 J = __builtin_ctz(X); \
2944 I = (bindex_t)J;\
2945}
2946
2947#elif defined (__INTEL_COMPILER)
2948#define compute_bit2idx(X, I)\
2949{\
2950 unsigned int J;\
2951 J = _bit_scan_forward (X); \
2952 I = (bindex_t)J;\
2953}
2954
2955#elif defined(_MSC_VER) && _MSC_VER>=1300
2956#define compute_bit2idx(X, I)\
2957{\
2958 unsigned int J;\
2959 _BitScanForward((DWORD *) &J, X);\
2960 I = (bindex_t)J;\
2961}
2962
2963#elif USE_BUILTIN_FFS
2964#define compute_bit2idx(X, I) I = ffs(X)-1
2965
2966#else
2967#define compute_bit2idx(X, I)\
2968{\
2969 unsigned int Y = X - 1;\
2970 unsigned int K = Y >> (16-4) & 16;\
2971 unsigned int N = K; Y >>= K;\
2972 N += K = Y >> (8-3) & 8; Y >>= K;\
2973 N += K = Y >> (4-2) & 4; Y >>= K;\
2974 N += K = Y >> (2-1) & 2; Y >>= K;\
2975 N += K = Y >> (1-0) & 1; Y >>= K;\
2976 I = (bindex_t)(N + Y);\
2977}
2978#endif /* GNUC */
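
/*
  All variants agree; e.g. compute_bit2idx(0x08, I) yields I == 3.
  In the portable fallback, Y == 7 accumulates N == 2 through the
  shift cascade and ends with Y == 1, giving I == N + Y == 3.
*/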
2979
2980
2981/* ----------------------- Runtime Check Support ------------------------- */
2982
2983/*
2984 For security, the main invariant is that malloc/free/etc never
2985 writes to a static address other than malloc_state, unless static
2986 malloc_state itself has been corrupted, which cannot occur via
2987 malloc (because of these checks). In essence this means that we
2988 believe all pointers, sizes, maps etc held in malloc_state, but
2989  check all of those linked or offset from other embedded data
2990 structures. These checks are interspersed with main code in a way
2991 that tends to minimize their run-time cost.
2992
2993 When FOOTERS is defined, in addition to range checking, we also
2994  verify footer fields of inuse chunks, which can be used to guarantee
2995 that the mstate controlling malloc/free is intact. This is a
2996 streamlined version of the approach described by William Robertson
2997 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2998 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2999 of an inuse chunk holds the xor of its mstate and a random seed,
3000 that is checked upon calls to free() and realloc(). This is
3001  (probabilistically) unguessable from outside the program, but can be
3002 computed by any code successfully malloc'ing any chunk, so does not
3003 itself provide protection against code that has already broken
3004 security through some other means. Unlike Robertson et al, we
3005 always dynamically check addresses of all offset chunks (previous,
3006 next, etc). This turns out to be cheaper than relying on hashes.
3007*/
3008
3009#if !INSECURE
3010/* Check if address a is at least as high as any from MORECORE or MMAP */
3011#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3012/* Check if address of next chunk n is higher than base chunk p */
3013#define ok_next(p, n) ((char*)(p) < (char*)(n))
3014/* Check if p has inuse status */
3015#define ok_inuse(p) is_inuse(p)
3016/* Check if p has its pinuse bit on */
3017#define ok_pinuse(p) pinuse(p)
3018
3019#else /* !INSECURE */
3020#define ok_address(M, a) (1)
3021#define ok_next(b, n) (1)
3022#define ok_inuse(p) (1)
3023#define ok_pinuse(p) (1)
3024#endif /* !INSECURE */
3025
3026#if (FOOTERS && !INSECURE)
3027/* Check if (alleged) mstate m has expected magic field */
3028#define ok_magic(M) ((M)->magic == mparams.magic)
3029#else /* (FOOTERS && !INSECURE) */
3030#define ok_magic(M) (1)
3031#endif /* (FOOTERS && !INSECURE) */
3032
3033/* In gcc, use __builtin_expect to minimize impact of checks */
3034#if !INSECURE
3035#if defined(__GNUC__) && __GNUC__ >= 3
3036#define RTCHECK(e) __builtin_expect(e, 1)
3037#else /* GNUC */
3038#define RTCHECK(e) (e)
3039#endif /* GNUC */
3040#else /* !INSECURE */
3041#define RTCHECK(e) (1)
3042#endif /* !INSECURE */
3043
3044/* macros to set up inuse chunks with or without footers */
3045
3046#if !FOOTERS
3047
3048#define mark_inuse_foot(M,p,s)
3049
3050/* Macros for setting head/foot of non-mmapped chunks */
3051
3052/* Set cinuse bit and pinuse bit of next chunk */
3053#define set_inuse(M,p,s)\
3054 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3055 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3056
3057/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3058#define set_inuse_and_pinuse(M,p,s)\
3059 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3060 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3061
3062/* Set size, cinuse and pinuse bit of this chunk */
3063#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3064 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3065
3066#else /* FOOTERS */
3067
3068/* Set foot of inuse chunk to be xor of mstate and seed */
3069#define mark_inuse_foot(M,p,s)\
3070 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3071
3072#define get_mstate_for(p)\
3073 ((mstate)(((mchunkptr)((char*)(p) +\
3074 (chunksize(p))))->prev_foot ^ mparams.magic))
3075
3076#define set_inuse(M,p,s)\
3077 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3078 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3079 mark_inuse_foot(M,p,s))
3080
3081#define set_inuse_and_pinuse(M,p,s)\
3082 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3083 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3084 mark_inuse_foot(M,p,s))
3085
3086#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3087 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3088 mark_inuse_foot(M, p, s))
3089
3090#endif /* !FOOTERS */
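
/*
  Sketch (illustrative, not new logic) of how the free path later in
  this file uses these macros when FOOTERS is on:

    mstate fm = get_mstate_for(mem2chunk(mem));
    if (!ok_magic(fm)) USAGE_ERROR_ACTION(fm, mem);

  Because the stored footer is ((size_t)m ^ mparams.magic), xoring
  again with the same random magic recovers m; a forged or clobbered
  footer instead yields a pointer whose magic field almost certainly
  fails the check.
*/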
3091
3092/* ---------------------------- setting mparams -------------------------- */
3093
3094#if LOCK_AT_FORK
3095static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
3096static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3097static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3098#endif /* LOCK_AT_FORK */
3099
3100/* Initialize mparams */
3101static int init_mparams(void) {
3102#ifdef NEED_GLOBAL_LOCK_INIT
3103 if (malloc_global_mutex_status <= 0)
3104 init_malloc_global_mutex();
3105#endif
3106
3107 ACQUIRE_MALLOC_GLOBAL_LOCK();
3108 if (mparams.magic == 0) {
3109 size_t magic;
3110 size_t psize;
3111 size_t gsize;
3112
3113#ifndef WIN32
3114 psize = malloc_getpagesize;
3115 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3116#else /* WIN32 */
3117 {
3118 SYSTEM_INFO system_info;
3119 GetSystemInfo(&system_info);
3120 psize = system_info.dwPageSize;
3121 gsize = ((DEFAULT_GRANULARITY != 0)?
3122 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3123 }
3124#endif /* WIN32 */
3125
3126 /* Sanity-check configuration:
3127 size_t must be unsigned and as wide as pointer type.
3128 ints must be at least 4 bytes.
3129 alignment must be at least 8.
3130 Alignment, min chunk size, and page size must all be powers of 2.
3131 */
3132 if ((sizeof(size_t) != sizeof(char*)) ||
3133 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3134 (sizeof(int) < 4) ||
3135 (MALLOC_ALIGNMENT < (size_t)8U) ||
3136 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3137 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3138 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3139 ((psize & (psize-SIZE_T_ONE)) != 0))
3140 ABORT;
3141    mparams.granularity = gsize;
3142 mparams.page_size = psize;
3143 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3144 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3145#if MORECORE_CONTIGUOUS
3146 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3147#else /* MORECORE_CONTIGUOUS */
3148 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3149#endif /* MORECORE_CONTIGUOUS */
3150
3151#if !ONLY_MSPACES
3152 /* Set up lock for main malloc area */
3153 gm->mflags = mparams.default_mflags;
3154 (void)INITIAL_LOCK(&gm->mutex);
3155#endif
3156#if LOCK_AT_FORK
3157 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3158#endif
3159
3160 {
3161#if USE_DEV_RANDOM
3162 int fd;
3163 unsigned char buf[sizeof(size_t)];
3164 /* Try to use /dev/urandom, else fall back on using time */
3165 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3166 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3167 magic = *((size_t *) buf);
3168 close(fd);
3169 }
3170 else
3171#endif /* USE_DEV_RANDOM */
3172#ifdef WIN32
3173      magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3174#elif defined(LACKS_TIME_H)
3175      magic = (size_t)&magic ^ (size_t)0x55555555U;
3176#else
3177      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3178#endif
3179 magic |= (size_t)8U; /* ensure nonzero */
3180 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3181    /* Until memory-ordering modes are commonly available, use a volatile write */
3182 (*(volatile size_t *)(&(mparams.magic))) = magic;
3183 }
3184 }
3185
3186 RELEASE_MALLOC_GLOBAL_LOCK();
3187 return 1;
3188}
3189
3190/* support for mallopt */
3191static int change_mparam(int param_number, int value) {
3192 size_t val;
3193 ensure_initialization();
3194 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3195 switch(param_number) {
3196 case M_TRIM_THRESHOLD:
3197 mparams.trim_threshold = val;
3198 return 1;
3199 case M_GRANULARITY:
3200 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3201 mparams.granularity = val;
3202 return 1;
3203 }
3204 else
3205 return 0;
3206 case M_MMAP_THRESHOLD:
3207 mparams.mmap_threshold = val;
3208 return 1;
3209 default:
3210 return 0;
3211 }
3212}
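
/*
  Illustrative use of the mallopt interface that change_mparam backs
  (parameter constants as defined earlier in this file):
  mallopt(M_GRANULARITY, 64*1024) succeeds only if the value is a
  power of two at least page_size, per the check above, while
  mallopt(M_TRIM_THRESHOLD, -1) maps -1 to MAX_SIZE_T and so
  effectively disables trimming.
*/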
3213
3214#if DEBUG
3215/* ------------------------- Debugging Support --------------------------- */
3216
3217/* Check properties of any chunk, whether free, inuse, mmapped etc */
3218static void do_check_any_chunk(mstate m, mchunkptr p) {
3219 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3220 assert(ok_address(m, p));
3221}
3222
3223/* Check properties of top chunk */
3224static void do_check_top_chunk(mstate m, mchunkptr p) {
3225 msegmentptr sp = segment_holding(m, (char*)p);
3226 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3227 assert(sp != 0);
3228 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3229 assert(ok_address(m, p));
3230 assert(sz == m->topsize);
3231 assert(sz > 0);
3232 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3233 assert(pinuse(p));
3234 assert(!pinuse(chunk_plus_offset(p, sz)));
3235}
3236
3237/* Check properties of (inuse) mmapped chunks */
3238static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3239 size_t sz = chunksize(p);
3240 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3241 assert(is_mmapped(p));
3242 assert(use_mmap(m));
3243 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3244 assert(ok_address(m, p));
3245 assert(!is_small(sz));
3246 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3247 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3248 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3249}
3250
3251/* Check properties of inuse chunks */
3252static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3253 do_check_any_chunk(m, p);
3254 assert(is_inuse(p));
3255 assert(next_pinuse(p));
3256 /* If not pinuse and not mmapped, previous chunk has OK offset */
3257 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3258 if (is_mmapped(p))
3259 do_check_mmapped_chunk(m, p);
3260}
3261
3262/* Check properties of free chunks */
3263static void do_check_free_chunk(mstate m, mchunkptr p) {
3264 size_t sz = chunksize(p);
3265 mchunkptr next = chunk_plus_offset(p, sz);
3266 do_check_any_chunk(m, p);
3267 assert(!is_inuse(p));
3268 assert(!next_pinuse(p));
3269 assert (!is_mmapped(p));
3270 if (p != m->dv && p != m->top) {
3271 if (sz >= MIN_CHUNK_SIZE) {
3272 assert((sz & CHUNK_ALIGN_MASK) == 0);
3273 assert(is_aligned(chunk2mem(p)));
3274 assert(next->prev_foot == sz);
3275 assert(pinuse(p));
3276 assert (next == m->top || is_inuse(next));
3277 assert(p->fd->bk == p);
3278 assert(p->bk->fd == p);
3279 }
3280 else /* markers are always of size SIZE_T_SIZE */
3281 assert(sz == SIZE_T_SIZE);
3282 }
3283}
3284
3285/* Check properties of malloced chunks at the point they are malloced */
3286static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3287 if (mem != 0) {
3288 mchunkptr p = mem2chunk(mem);
3289 size_t sz = p->head & ~INUSE_BITS;
3290 do_check_inuse_chunk(m, p);
3291 assert((sz & CHUNK_ALIGN_MASK) == 0);
3292 assert(sz >= MIN_CHUNK_SIZE);
3293 assert(sz >= s);
3294 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3295 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3296 }
3297}
3298
3299/* Check a tree and its subtrees. */
3300static void do_check_tree(mstate m, tchunkptr t) {
3301 tchunkptr head = 0;
3302 tchunkptr u = t;
3303 bindex_t tindex = t->index;
3304 size_t tsize = chunksize(t);
3305 bindex_t idx;
3306 compute_tree_index(tsize, idx);
3307 assert(tindex == idx);
3308 assert(tsize >= MIN_LARGE_SIZE);
3309 assert(tsize >= minsize_for_tree_index(idx));
3310 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3311
3312 do { /* traverse through chain of same-sized nodes */
3313 do_check_any_chunk(m, ((mchunkptr)u));
3314 assert(u->index == tindex);
3315 assert(chunksize(u) == tsize);
3316 assert(!is_inuse(u));
3317 assert(!next_pinuse(u));
3318 assert(u->fd->bk == u);
3319 assert(u->bk->fd == u);
3320 if (u->parent == 0) {
3321 assert(u->child[0] == 0);
3322 assert(u->child[1] == 0);
3323 }
3324 else {
3325 assert(head == 0); /* only one node on chain has parent */
3326 head = u;
3327 assert(u->parent != u);
3328 assert (u->parent->child[0] == u ||
3329 u->parent->child[1] == u ||
3330 *((tbinptr*)(u->parent)) == u);
3331 if (u->child[0] != 0) {
3332 assert(u->child[0]->parent == u);
3333 assert(u->child[0] != u);
3334 do_check_tree(m, u->child[0]);
3335 }
3336 if (u->child[1] != 0) {
3337 assert(u->child[1]->parent == u);
3338 assert(u->child[1] != u);
3339 do_check_tree(m, u->child[1]);
3340 }
3341 if (u->child[0] != 0 && u->child[1] != 0) {
3342 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3343 }
3344 }
3345 u = u->fd;
3346 } while (u != t);
3347 assert(head != 0);
3348}
3349
3350/* Check all the chunks in a treebin. */
3351static void do_check_treebin(mstate m, bindex_t i) {
3352 tbinptr* tb = treebin_at(m, i);
3353 tchunkptr t = *tb;
3354 int empty = (m->treemap & (1U << i)) == 0;
3355 if (t == 0)
3356 assert(empty);
3357 if (!empty)
3358 do_check_tree(m, t);
3359}
3360
3361/* Check all the chunks in a smallbin. */
3362static void do_check_smallbin(mstate m, bindex_t i) {
3363 sbinptr b = smallbin_at(m, i);
3364 mchunkptr p = b->bk;
3365 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3366 if (p == b)
3367 assert(empty);
3368 if (!empty) {
3369 for (; p != b; p = p->bk) {
3370 size_t size = chunksize(p);
3371 mchunkptr q;
3372 /* each chunk claims to be free */
3373 do_check_free_chunk(m, p);
3374 /* chunk belongs in bin */
3375 assert(small_index(size) == i);
3376 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3377 /* chunk is followed by an inuse chunk */
3378 q = next_chunk(p);
3379 if (q->head != FENCEPOST_HEAD)
3380 do_check_inuse_chunk(m, q);
3381 }
3382 }
3383}
3384
3385/* Find x in a bin. Used in other check functions. */
3386static int bin_find(mstate m, mchunkptr x) {
3387 size_t size = chunksize(x);
3388 if (is_small(size)) {
3389 bindex_t sidx = small_index(size);
3390 sbinptr b = smallbin_at(m, sidx);
3391 if (smallmap_is_marked(m, sidx)) {
3392 mchunkptr p = b;
3393 do {
3394 if (p == x)
3395 return 1;
3396 } while ((p = p->fd) != b);
3397 }
3398 }
3399 else {
3400 bindex_t tidx;
3401 compute_tree_index(size, tidx);
3402 if (treemap_is_marked(m, tidx)) {
3403 tchunkptr t = *treebin_at(m, tidx);
3404 size_t sizebits = size << leftshift_for_tree_index(tidx);
3405 while (t != 0 && chunksize(t) != size) {
3406 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3407 sizebits <<= 1;
3408 }
3409 if (t != 0) {
3410 tchunkptr u = t;
3411 do {
3412 if (u == (tchunkptr)x)
3413 return 1;
3414 } while ((u = u->fd) != t);
3415 }
3416 }
3417 }
3418 return 0;
3419}
3420
3421/* Traverse each chunk and check it; return total */
3422static size_t traverse_and_check(mstate m) {
3423 size_t sum = 0;
3424 if (is_initialized(m)) {
3425 msegmentptr s = &m->seg;
3426 sum += m->topsize + TOP_FOOT_SIZE;
3427 while (s != 0) {
3428 mchunkptr q = align_as_chunk(s->base);
3429 mchunkptr lastq = 0;
3430 assert(pinuse(q));
3431 while (segment_holds(s, q) &&
3432 q != m->top && q->head != FENCEPOST_HEAD) {
3433 sum += chunksize(q);
3434 if (is_inuse(q)) {
3435 assert(!bin_find(m, q));
3436 do_check_inuse_chunk(m, q);
3437 }
3438 else {
3439 assert(q == m->dv || bin_find(m, q));
3440 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3441 do_check_free_chunk(m, q);
3442 }
3443 lastq = q;
3444 q = next_chunk(q);
3445 }
3446 s = s->next;
3447 }
3448 }
3449 return sum;
3450}
3451
3452
3453/* Check all properties of malloc_state. */
3454static void do_check_malloc_state(mstate m) {
3455 bindex_t i;
3456 size_t total;
3457 /* check bins */
3458 for (i = 0; i < NSMALLBINS; ++i)
3459 do_check_smallbin(m, i);
3460 for (i = 0; i < NTREEBINS; ++i)
3461 do_check_treebin(m, i);
3462
3463 if (m->dvsize != 0) { /* check dv chunk */
3464 do_check_any_chunk(m, m->dv);
3465 assert(m->dvsize == chunksize(m->dv));
3466 assert(m->dvsize >= MIN_CHUNK_SIZE);
3467 assert(bin_find(m, m->dv) == 0);
3468 }
3469
3470 if (m->top != 0) { /* check top chunk */
3471 do_check_top_chunk(m, m->top);
3472 /*assert(m->topsize == chunksize(m->top)); redundant */
3473 assert(m->topsize > 0);
3474 assert(bin_find(m, m->top) == 0);
3475 }
3476
3477 total = traverse_and_check(m);
3478 assert(total <= m->footprint);
3479 assert(m->footprint <= m->max_footprint);
3480}
3481#endif /* DEBUG */
3482
3483/* ----------------------------- statistics ------------------------------ */
3484
3485#if !NO_MALLINFO
3486static struct mallinfo internal_mallinfo(mstate m) {
3487 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3488 ensure_initialization();
3489 if (!PREACTION(m)) {
3490 check_malloc_state(m);
3491 if (is_initialized(m)) {
3492 size_t nfree = SIZE_T_ONE; /* top always free */
3493 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3494 size_t sum = mfree;
3495 msegmentptr s = &m->seg;
3496 while (s != 0) {
3497 mchunkptr q = align_as_chunk(s->base);
3498 while (segment_holds(s, q) &&
3499 q != m->top && q->head != FENCEPOST_HEAD) {
3500 size_t sz = chunksize(q);
3501 sum += sz;
3502 if (!is_inuse(q)) {
3503 mfree += sz;
3504 ++nfree;
3505 }
3506 q = next_chunk(q);
3507 }
3508 s = s->next;
3509 }
3510
3511 nm.arena = sum;
3512 nm.ordblks = nfree;
3513 nm.hblkhd = m->footprint - sum;
3514 nm.usmblks = m->max_footprint;
3515 nm.uordblks = m->footprint - mfree;
3516 nm.fordblks = mfree;
3517 nm.keepcost = m->topsize;
3518 }
3519
3520 POSTACTION(m);
3521 }
3522 return nm;
3523}
3524#endif /* !NO_MALLINFO */
3525
3526#if !NO_MALLOC_STATS
3527static void internal_malloc_stats(mstate m) {
3528 ensure_initialization();
3529 if (!PREACTION(m)) {
3530 size_t maxfp = 0;
3531 size_t fp = 0;
3532 size_t used = 0;
3533 check_malloc_state(m);
3534 if (is_initialized(m)) {
3535 msegmentptr s = &m->seg;
3536 maxfp = m->max_footprint;
3537 fp = m->footprint;
3538 used = fp - (m->topsize + TOP_FOOT_SIZE);
3539
3540 while (s != 0) {
3541 mchunkptr q = align_as_chunk(s->base);
3542 while (segment_holds(s, q) &&
3543 q != m->top && q->head != FENCEPOST_HEAD) {
3544 if (!is_inuse(q))
3545 used -= chunksize(q);
3546 q = next_chunk(q);
3547 }
3548 s = s->next;
3549 }
3550 }
3551 POSTACTION(m); /* drop lock */
3552 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3553 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3554 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3555 }
3556}
3557#endif /* NO_MALLOC_STATS */
3558
3559/* ----------------------- Operations on smallbins ----------------------- */
3560
3561/*
3562 Various forms of linking and unlinking are defined as macros. Even
3563 the ones for trees, which are very long but have very short typical
3564 paths. This is ugly but reduces reliance on inlining support of
3565 compilers.
3566*/
3567
3568/* Link a free chunk into a smallbin */
3569#define insert_small_chunk(M, P, S) {\
3570 bindex_t I = small_index(S);\
3571 mchunkptr B = smallbin_at(M, I);\
3572 mchunkptr F = B;\
3573 assert(S >= MIN_CHUNK_SIZE);\
3574 if (!smallmap_is_marked(M, I))\
3575 mark_smallmap(M, I);\
3576 else if (RTCHECK(ok_address(M, B->fd)))\
3577 F = B->fd;\
3578 else {\
3579 CORRUPTION_ERROR_ACTION(M);\
3580 }\
3581 B->fd = P;\
3582 F->bk = P;\
3583 P->fd = F;\
3584 P->bk = B;\
3585}
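
/*
  After insert_small_chunk, the bin header B, the new chunk P, and
  the previous front chunk F are linked B <-> P <-> F in fd order.
  Since the small-request path of malloc below takes b->fd first,
  each smallbin behaves as a LIFO stack of equal-sized chunks.
*/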
3586
3587/* Unlink a chunk from a smallbin */
3588#define unlink_small_chunk(M, P, S) {\
3589 mchunkptr F = P->fd;\
3590 mchunkptr B = P->bk;\
3591 bindex_t I = small_index(S);\
3592 assert(P != B);\
3593 assert(P != F);\
3594 assert(chunksize(P) == small_index2size(I));\
3595 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3596 if (B == F) {\
3597 clear_smallmap(M, I);\
3598 }\
3599 else if (RTCHECK(B == smallbin_at(M,I) ||\
3600 (ok_address(M, B) && B->fd == P))) {\
3601 F->bk = B;\
3602 B->fd = F;\
3603 }\
3604 else {\
3605 CORRUPTION_ERROR_ACTION(M);\
3606 }\
3607 }\
3608 else {\
3609 CORRUPTION_ERROR_ACTION(M);\
3610 }\
3611}
3612
3613/* Unlink the first chunk from a smallbin */
3614#define unlink_first_small_chunk(M, B, P, I) {\
3615 mchunkptr F = P->fd;\
3616 assert(P != B);\
3617 assert(P != F);\
3618 assert(chunksize(P) == small_index2size(I));\
3619 if (B == F) {\
3620 clear_smallmap(M, I);\
3621 }\
3622 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3623 F->bk = B;\
3624 B->fd = F;\
3625 }\
3626 else {\
3627 CORRUPTION_ERROR_ACTION(M);\
3628 }\
3629}
3630
3631/* Replace dv node, binning the old one */
3632/* Used only when dvsize known to be small */
3633#define replace_dv(M, P, S) {\
3634 size_t DVS = M->dvsize;\
3635 assert(is_small(DVS));\
3636 if (DVS != 0) {\
3637 mchunkptr DV = M->dv;\
3638 insert_small_chunk(M, DV, DVS);\
3639 }\
3640 M->dvsize = S;\
3641 M->dv = P;\
3642}
3643
3644/* ------------------------- Operations on trees ------------------------- */
3645
3646/* Insert chunk into tree */
3647#define insert_large_chunk(M, X, S) {\
3648 tbinptr* H;\
3649 bindex_t I;\
3650 compute_tree_index(S, I);\
3651 H = treebin_at(M, I);\
3652 X->index = I;\
3653 X->child[0] = X->child[1] = 0;\
3654 if (!treemap_is_marked(M, I)) {\
3655 mark_treemap(M, I);\
3656 *H = X;\
3657 X->parent = (tchunkptr)H;\
3658 X->fd = X->bk = X;\
3659 }\
3660 else {\
3661 tchunkptr T = *H;\
3662 size_t K = S << leftshift_for_tree_index(I);\
3663 for (;;) {\
3664 if (chunksize(T) != S) {\
3665 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3666 K <<= 1;\
3667 if (*C != 0)\
3668 T = *C;\
3669 else if (RTCHECK(ok_address(M, C))) {\
3670 *C = X;\
3671 X->parent = T;\
3672 X->fd = X->bk = X;\
3673 break;\
3674 }\
3675 else {\
3676 CORRUPTION_ERROR_ACTION(M);\
3677 break;\
3678 }\
3679 }\
3680 else {\
3681 tchunkptr F = T->fd;\
3682 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3683 T->fd = F->bk = X;\
3684 X->fd = F;\
3685 X->bk = T;\
3686 X->parent = 0;\
3687 break;\
3688 }\
3689 else {\
3690 CORRUPTION_ERROR_ACTION(M);\
3691 break;\
3692 }\
3693 }\
3694 }\
3695 }\
3696}
3697
3698/*
3699 Unlink steps:
3700
3701 1. If x is a chained node, unlink it from its same-sized fd/bk links
3702 and choose its bk node as its replacement.
3703 2. If x was the last node of its size, but not a leaf node, it must
3704 be replaced with a leaf node (not merely one with an open left or
3705     right), to make sure that lefts and rights of descendants
3706     correspond properly to bit masks. We use the rightmost descendant
3707 of x. We could use any other leaf, but this is easy to locate and
3708 tends to counteract removal of leftmosts elsewhere, and so keeps
3709 paths shorter than minimally guaranteed. This doesn't loop much
3710 because on average a node in a tree is near the bottom.
3711 3. If x is the base of a chain (i.e., has parent links) relink
3712 x's parent and children to x's replacement (or null if none).
3713*/
3714
3715#define unlink_large_chunk(M, X) {\
3716 tchunkptr XP = X->parent;\
3717 tchunkptr R;\
3718 if (X->bk != X) {\
3719 tchunkptr F = X->fd;\
3720 R = X->bk;\
3721 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3722 F->bk = R;\
3723 R->fd = F;\
3724 }\
3725 else {\
3726 CORRUPTION_ERROR_ACTION(M);\
3727 }\
3728 }\
3729 else {\
3730 tchunkptr* RP;\
3731 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3732 ((R = *(RP = &(X->child[0]))) != 0)) {\
3733 tchunkptr* CP;\
3734 while ((*(CP = &(R->child[1])) != 0) ||\
3735 (*(CP = &(R->child[0])) != 0)) {\
3736 R = *(RP = CP);\
3737 }\
3738 if (RTCHECK(ok_address(M, RP)))\
3739 *RP = 0;\
3740 else {\
3741 CORRUPTION_ERROR_ACTION(M);\
3742 }\
3743 }\
3744 }\
3745 if (XP != 0) {\
3746 tbinptr* H = treebin_at(M, X->index);\
3747 if (X == *H) {\
3748 if ((*H = R) == 0) \
3749 clear_treemap(M, X->index);\
3750 }\
3751 else if (RTCHECK(ok_address(M, XP))) {\
3752 if (XP->child[0] == X) \
3753 XP->child[0] = R;\
3754 else \
3755 XP->child[1] = R;\
3756 }\
3757 else\
3758 CORRUPTION_ERROR_ACTION(M);\
3759 if (R != 0) {\
3760 if (RTCHECK(ok_address(M, R))) {\
3761 tchunkptr C0, C1;\
3762 R->parent = XP;\
3763 if ((C0 = X->child[0]) != 0) {\
3764 if (RTCHECK(ok_address(M, C0))) {\
3765 R->child[0] = C0;\
3766 C0->parent = R;\
3767 }\
3768 else\
3769 CORRUPTION_ERROR_ACTION(M);\
3770 }\
3771 if ((C1 = X->child[1]) != 0) {\
3772 if (RTCHECK(ok_address(M, C1))) {\
3773 R->child[1] = C1;\
3774 C1->parent = R;\
3775 }\
3776 else\
3777 CORRUPTION_ERROR_ACTION(M);\
3778 }\
3779 }\
3780 else\
3781 CORRUPTION_ERROR_ACTION(M);\
3782 }\
3783 }\
3784}
3785
3786/* Relays to large vs small bin operations */
3787
3788#define insert_chunk(M, P, S)\
3789 if (is_small(S)) insert_small_chunk(M, P, S)\
3790 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3791
3792#define unlink_chunk(M, P, S)\
3793 if (is_small(S)) unlink_small_chunk(M, P, S)\
3794 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3795
3796
3797/* Relays to internal calls to malloc/free from realloc, memalign etc */
3798
3799#if ONLY_MSPACES
3800#define internal_malloc(m, b) mspace_malloc(m, b)
3801#define internal_free(m, mem) mspace_free(m,mem);
3802#else /* ONLY_MSPACES */
3803#if MSPACES
3804#define internal_malloc(m, b)\
3805 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3806#define internal_free(m, mem)\
3807 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3808#else /* MSPACES */
3809#define internal_malloc(m, b) dlmalloc(b)
3810#define internal_free(m, mem) dlfree(mem)
3811#endif /* MSPACES */
3812#endif /* ONLY_MSPACES */
3813
3814/* ----------------------- Direct-mmapping chunks ----------------------- */
3815
3816/*
3817 Directly mmapped chunks are set up with an offset to the start of
3818 the mmapped region stored in the prev_foot field of the chunk. This
3819 allows reconstruction of the required argument to MUNMAP when freed,
3820 and also allows adjustment of the returned chunk to meet alignment
3821 requirements (especially in memalign).
3822*/
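
/*
  Illustrative layout of a directly mmapped chunk, as set up by
  mmap_alloc below:

    mm                 p = mm + offset
    |<---- offset ---->|<---------- psize ---------->|<- foot pad ->|
                       prev_foot = offset             FENCEPOST_HEAD
                       head = psize                   then 0

  so the free path can hand MUNMAP the original base
  ((char*)p - p->prev_foot) and length (psize + offset +
  MMAP_FOOT_PAD), as dispose_chunk does below.
*/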
3823
3824/* Malloc using mmap */
3825static void* mmap_alloc(mstate m, size_t nb) {
3826 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3827 if (m->footprint_limit != 0) {
3828 size_t fp = m->footprint + mmsize;
3829 if (fp <= m->footprint || fp > m->footprint_limit)
3830 return 0;
3831 }
3832 if (mmsize > nb) { /* Check for wrap around 0 */
3833 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3834 if (mm != CMFAIL) {
3835 size_t offset = align_offset(chunk2mem(mm));
3836 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3837 mchunkptr p = (mchunkptr)(mm + offset);
3838 p->prev_foot = offset;
3839 p->head = psize;
3840 mark_inuse_foot(m, p, psize);
3841 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3842 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3843
3844 if (m->least_addr == 0 || mm < m->least_addr)
3845 m->least_addr = mm;
3846 if ((m->footprint += mmsize) > m->max_footprint)
3847 m->max_footprint = m->footprint;
3848 assert(is_aligned(chunk2mem(p)));
3849 check_mmapped_chunk(m, p);
3850 return chunk2mem(p);
3851 }
3852 }
3853 return 0;
3854}
3855
3856/* Realloc using mmap */
3857static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3858 size_t oldsize = chunksize(oldp);
3859  (void)flags; /* placate people compiling -Wunused */
3860  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3861 return 0;
3862 /* Keep old chunk if big enough but not too big */
3863 if (oldsize >= nb + SIZE_T_SIZE &&
3864 (oldsize - nb) <= (mparams.granularity << 1))
3865 return oldp;
3866 else {
3867 size_t offset = oldp->prev_foot;
3868 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3869 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3870 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3871 oldmmsize, newmmsize, flags);
3872 if (cp != CMFAIL) {
3873 mchunkptr newp = (mchunkptr)(cp + offset);
3874 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3875 newp->head = psize;
3876 mark_inuse_foot(m, newp, psize);
3877 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3878 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3879
3880 if (cp < m->least_addr)
3881 m->least_addr = cp;
3882 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3883 m->max_footprint = m->footprint;
3884 check_mmapped_chunk(m, newp);
3885 return newp;
3886 }
3887 }
3888 return 0;
3889}
3890
3891
3892/* -------------------------- mspace management -------------------------- */
3893
3894/* Initialize top chunk and its size */
3895static void init_top(mstate m, mchunkptr p, size_t psize) {
3896 /* Ensure alignment */
3897 size_t offset = align_offset(chunk2mem(p));
3898 p = (mchunkptr)((char*)p + offset);
3899 psize -= offset;
3900
3901 m->top = p;
3902 m->topsize = psize;
3903 p->head = psize | PINUSE_BIT;
3904 /* set size of fake trailing chunk holding overhead space only once */
3905 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3906 m->trim_check = mparams.trim_threshold; /* reset on each update */
3907}
3908
3909/* Initialize bins for a new mstate that is otherwise zeroed out */
3910static void init_bins(mstate m) {
3911 /* Establish circular links for smallbins */
3912 bindex_t i;
3913 for (i = 0; i < NSMALLBINS; ++i) {
3914 sbinptr bin = smallbin_at(m,i);
3915 bin->fd = bin->bk = bin;
3916 }
3917}
3918
3919#if PROCEED_ON_ERROR
3920
3921/* default corruption action */
3922static void reset_on_error(mstate m) {
3923 int i;
3924 ++malloc_corruption_error_count;
3925 /* Reinitialize fields to forget about all memory */
3926 m->smallmap = m->treemap = 0;
3927 m->dvsize = m->topsize = 0;
3928 m->seg.base = 0;
3929 m->seg.size = 0;
3930 m->seg.next = 0;
3931 m->top = m->dv = 0;
3932 for (i = 0; i < NTREEBINS; ++i)
3933 *treebin_at(m, i) = 0;
3934 init_bins(m);
3935}
3936#endif /* PROCEED_ON_ERROR */
3937
3938/* Allocate a chunk from newbase and prepend the remainder to the first chunk of the successor (old) base. */
3939static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3940 size_t nb) {
3941 mchunkptr p = align_as_chunk(newbase);
3942 mchunkptr oldfirst = align_as_chunk(oldbase);
3943 size_t psize = (char*)oldfirst - (char*)p;
3944 mchunkptr q = chunk_plus_offset(p, nb);
3945 size_t qsize = psize - nb;
3946 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3947
3948 assert((char*)oldfirst > (char*)q);
3949 assert(pinuse(oldfirst));
3950 assert(qsize >= MIN_CHUNK_SIZE);
3951
3952 /* consolidate remainder with first chunk of old base */
3953 if (oldfirst == m->top) {
3954 size_t tsize = m->topsize += qsize;
3955 m->top = q;
3956 q->head = tsize | PINUSE_BIT;
3957 check_top_chunk(m, q);
3958 }
3959 else if (oldfirst == m->dv) {
3960 size_t dsize = m->dvsize += qsize;
3961 m->dv = q;
3962 set_size_and_pinuse_of_free_chunk(q, dsize);
3963 }
3964 else {
3965 if (!is_inuse(oldfirst)) {
3966 size_t nsize = chunksize(oldfirst);
3967 unlink_chunk(m, oldfirst, nsize);
3968 oldfirst = chunk_plus_offset(oldfirst, nsize);
3969 qsize += nsize;
3970 }
3971 set_free_with_pinuse(q, qsize, oldfirst);
3972 insert_chunk(m, q, qsize);
3973 check_free_chunk(m, q);
3974 }
3975
3976 check_malloced_chunk(m, chunk2mem(p), nb);
3977 return chunk2mem(p);
3978}
3979
3980/* Add a segment to hold a new noncontiguous region */
3981static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3982 /* Determine locations and sizes of segment, fenceposts, old top */
3983 char* old_top = (char*)m->top;
3984 msegmentptr oldsp = segment_holding(m, old_top);
3985 char* old_end = oldsp->base + oldsp->size;
3986 size_t ssize = pad_request(sizeof(struct malloc_segment));
3987 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3988 size_t offset = align_offset(chunk2mem(rawsp));
3989 char* asp = rawsp + offset;
3990 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3991 mchunkptr sp = (mchunkptr)csp;
3992 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3993 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3994 mchunkptr p = tnext;
3995 int nfences = 0;
3996
3997 /* reset top to new space */
3998 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3999
4000 /* Set up segment record */
4001 assert(is_aligned(ss));
4002 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4003 *ss = m->seg; /* Push current record */
4004 m->seg.base = tbase;
4005 m->seg.size = tsize;
4006 m->seg.sflags = mmapped;
4007 m->seg.next = ss;
4008
4009 /* Insert trailing fenceposts */
4010 for (;;) {
4011 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4012 p->head = FENCEPOST_HEAD;
4013 ++nfences;
4014 if ((char*)(&(nextp->head)) < old_end)
4015 p = nextp;
4016 else
4017 break;
4018 }
4019 assert(nfences >= 2);
4020
4021 /* Insert the rest of old top into a bin as an ordinary free chunk */
4022 if (csp != old_top) {
4023 mchunkptr q = (mchunkptr)old_top;
4024 size_t psize = csp - old_top;
4025 mchunkptr tn = chunk_plus_offset(q, psize);
4026 set_free_with_pinuse(q, psize, tn);
4027 insert_chunk(m, q, psize);
4028 }
4029
4030 check_top_chunk(m, m->top);
4031}
4032
4033/* -------------------------- System allocation -------------------------- */
4034
4035/* Get memory from system using MORECORE or MMAP */
4036static void* sys_alloc(mstate m, size_t nb) {
4037 char* tbase = CMFAIL;
4038 size_t tsize = 0;
4039 flag_t mmap_flag = 0;
4040 size_t asize; /* allocation size */
4041
4042 ensure_initialization();
4043
4044 /* Directly map large chunks, but only if already initialized */
4045 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4046 void* mem = mmap_alloc(m, nb);
4047 if (mem != 0)
4048 return mem;
4049 }
4050
4051 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4052 if (asize <= nb)
4053 return 0; /* wraparound */
4054 if (m->footprint_limit != 0) {
4055 size_t fp = m->footprint + asize;
4056 if (fp <= m->footprint || fp > m->footprint_limit)
4057 return 0;
4058 }
4059
4060 /*
4061 Try getting memory in any of three ways (in most-preferred to
4062 least-preferred order):
4063 1. A call to MORECORE that can normally contiguously extend memory.
4064 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4065       main space is mmapped or a previous contiguous call failed)
4066 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4067 Note that under the default settings, if MORECORE is unable to
4068 fulfill a request, and HAVE_MMAP is true, then mmap is
4069 used as a noncontiguous system allocator. This is a useful backup
4070 strategy for systems with holes in address spaces -- in this case
4071 sbrk cannot contiguously expand the heap, but mmap may be able to
4072 find space.
4073 3. A call to MORECORE that cannot usually contiguously extend memory.
4074 (disabled if not HAVE_MORECORE)
4075
4076 In all cases, we need to request enough bytes from system to ensure
4077 we can malloc nb bytes upon success, so pad with enough space for
4078 top_foot, plus alignment-pad to make sure we don't lose bytes if
4079 not on boundary, and round this up to a granularity unit.
4080 */
4081
4082 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4083 char* br = CMFAIL;
4084    size_t ssize = asize; /* sbrk call size */
4085    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4086 ACQUIRE_MALLOC_GLOBAL_LOCK();
4087
4088 if (ss == 0) { /* First time through or recovery */
4089 char* base = (char*)CALL_MORECORE(0);
4090 if (base != CMFAIL) {
4091 size_t fp;
4092 /* Adjust to end on a page boundary */
4093 if (!is_page_aligned(base))
4094          ssize += (page_align((size_t)base) - (size_t)base);
4095        fp = m->footprint + ssize; /* recheck limits */
4096        if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4097            (m->footprint_limit == 0 ||
4098             (fp > m->footprint && fp <= m->footprint_limit)) &&
4099            (br = (char*)(CALL_MORECORE(ssize))) == base) {
4100          tbase = base;
4101          tsize = ssize;
4102        }
4103 }
4104 }
4105 else {
4106 /* Subtract out existing available top space from MORECORE request. */
4107      ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4108      /* Use mem here only if it did continuously extend old space */
4109      if (ssize < HALF_MAX_SIZE_T &&
4110          (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4111        tbase = br;
4112        tsize = ssize;
4113      }
4114 }
4115
4116 if (tbase == CMFAIL) { /* Cope with partial failure */
4117 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4118        if (ssize < HALF_MAX_SIZE_T &&
4119            ssize < nb + SYS_ALLOC_PADDING) {
4120          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4121          if (esize < HALF_MAX_SIZE_T) {
4122            char* end = (char*)CALL_MORECORE(esize);
4123            if (end != CMFAIL)
4124              ssize += esize;
4125            else { /* Can't use; try to release */
4126              (void) CALL_MORECORE(-ssize);
4127              br = CMFAIL;
4128 }
4129 }
4130 }
4131 }
4132 if (br != CMFAIL) { /* Use the space we did get */
4133 tbase = br;
4134      tsize = ssize;
4135    }
4136 else
4137 disable_contiguous(m); /* Don't try contiguous path in the future */
4138 }
4139
4140 RELEASE_MALLOC_GLOBAL_LOCK();
4141 }
4142
4143 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4144 char* mp = (char*)(CALL_MMAP(asize));
4145 if (mp != CMFAIL) {
4146 tbase = mp;
4147 tsize = asize;
4148 mmap_flag = USE_MMAP_BIT;
4149 }
4150 }
4151
4152 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4153 if (asize < HALF_MAX_SIZE_T) {
4154 char* br = CMFAIL;
4155 char* end = CMFAIL;
4156 ACQUIRE_MALLOC_GLOBAL_LOCK();
4157 br = (char*)(CALL_MORECORE(asize));
4158 end = (char*)(CALL_MORECORE(0));
4159 RELEASE_MALLOC_GLOBAL_LOCK();
4160 if (br != CMFAIL && end != CMFAIL && br < end) {
4161 size_t ssize = end - br;
4162 if (ssize > nb + TOP_FOOT_SIZE) {
4163 tbase = br;
4164 tsize = ssize;
4165 }
4166 }
4167 }
4168 }
4169
4170 if (tbase != CMFAIL) {
4171
4172 if ((m->footprint += tsize) > m->max_footprint)
4173 m->max_footprint = m->footprint;
4174
4175 if (!is_initialized(m)) { /* first-time initialization */
4176 if (m->least_addr == 0 || tbase < m->least_addr)
4177 m->least_addr = tbase;
4178 m->seg.base = tbase;
4179 m->seg.size = tsize;
4180 m->seg.sflags = mmap_flag;
4181 m->magic = mparams.magic;
4182 m->release_checks = MAX_RELEASE_CHECK_RATE;
4183 init_bins(m);
4184#if !ONLY_MSPACES
4185 if (is_global(m))
4186 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4187 else
4188#endif
4189 {
4190 /* Offset top by embedded malloc_state */
4191 mchunkptr mn = next_chunk(mem2chunk(m));
4192 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4193 }
4194 }
4195
4196 else {
4197 /* Try to merge with an existing segment */
4198 msegmentptr sp = &m->seg;
4199 /* Only consider most recent segment if traversal suppressed */
4200 while (sp != 0 && tbase != sp->base + sp->size)
4201 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4202 if (sp != 0 &&
4203 !is_extern_segment(sp) &&
4204 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4205 segment_holds(sp, m->top)) { /* append */
4206 sp->size += tsize;
4207 init_top(m, m->top, m->topsize + tsize);
4208 }
4209 else {
4210 if (tbase < m->least_addr)
4211 m->least_addr = tbase;
4212 sp = &m->seg;
4213 while (sp != 0 && sp->base != tbase + tsize)
4214 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4215 if (sp != 0 &&
4216 !is_extern_segment(sp) &&
4217 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4218 char* oldbase = sp->base;
4219 sp->base = tbase;
4220 sp->size += tsize;
4221 return prepend_alloc(m, tbase, oldbase, nb);
4222 }
4223 else
4224 add_segment(m, tbase, tsize, mmap_flag);
4225 }
4226 }
4227
4228 if (nb < m->topsize) { /* Allocate from new or extended top space */
4229 size_t rsize = m->topsize -= nb;
4230 mchunkptr p = m->top;
4231 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4232 r->head = rsize | PINUSE_BIT;
4233 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4234 check_top_chunk(m, m->top);
4235 check_malloced_chunk(m, chunk2mem(p), nb);
4236 return chunk2mem(p);
4237 }
4238 }
4239
4240 MALLOC_FAILURE_ACTION;
4241 return 0;
4242}
4243
4244/* ----------------------- system deallocation -------------------------- */
4245
4246/* Unmap and unlink any mmapped segments that don't contain used chunks */
4247static size_t release_unused_segments(mstate m) {
4248 size_t released = 0;
4249 int nsegs = 0;
4250 msegmentptr pred = &m->seg;
4251 msegmentptr sp = pred->next;
4252 while (sp != 0) {
4253 char* base = sp->base;
4254 size_t size = sp->size;
4255 msegmentptr next = sp->next;
4256 ++nsegs;
4257 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4258 mchunkptr p = align_as_chunk(base);
4259 size_t psize = chunksize(p);
4260 /* Can unmap if first chunk holds entire segment and not pinned */
4261 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4262 tchunkptr tp = (tchunkptr)p;
4263 assert(segment_holds(sp, (char*)sp));
4264 if (p == m->dv) {
4265 m->dv = 0;
4266 m->dvsize = 0;
4267 }
4268 else {
4269 unlink_large_chunk(m, tp);
4270 }
4271 if (CALL_MUNMAP(base, size) == 0) {
4272 released += size;
4273 m->footprint -= size;
4274 /* unlink obsoleted record */
4275 sp = pred;
4276 sp->next = next;
4277 }
4278 else { /* back out if cannot unmap */
4279 insert_large_chunk(m, tp, psize);
4280 }
4281 }
4282 }
4283 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4284 break;
4285 pred = sp;
4286 sp = next;
4287 }
4288 /* Reset check counter */
4289  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4290                       (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4291  return released;
4292}
4293
4294static int sys_trim(mstate m, size_t pad) {
4295 size_t released = 0;
4296 ensure_initialization();
4297 if (pad < MAX_REQUEST && is_initialized(m)) {
4298 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4299
4300 if (m->topsize > pad) {
4301 /* Shrink top space in granularity-size units, keeping at least one */
4302 size_t unit = mparams.granularity;
4303 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4304 SIZE_T_ONE) * unit;
4305 msegmentptr sp = segment_holding(m, (char*)m->top);
4306
4307 if (!is_extern_segment(sp)) {
4308 if (is_mmapped_segment(sp)) {
4309 if (HAVE_MMAP &&
4310 sp->size >= extra &&
4311 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4312 size_t newsize = sp->size - extra;
4313          (void)newsize; /* placate people compiling -Wunused-variable */
4314          /* Prefer mremap, fall back to munmap */
4315 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4316 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4317 released = extra;
4318 }
4319 }
4320 }
4321 else if (HAVE_MORECORE) {
4322 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4323 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4324 ACQUIRE_MALLOC_GLOBAL_LOCK();
4325 {
4326 /* Make sure end of memory is where we last set it. */
4327 char* old_br = (char*)(CALL_MORECORE(0));
4328 if (old_br == sp->base + sp->size) {
4329 char* rel_br = (char*)(CALL_MORECORE(-extra));
4330 char* new_br = (char*)(CALL_MORECORE(0));
4331 if (rel_br != CMFAIL && new_br < old_br)
4332 released = old_br - new_br;
4333 }
4334 }
4335 RELEASE_MALLOC_GLOBAL_LOCK();
4336 }
4337 }
4338
4339 if (released != 0) {
4340 sp->size -= released;
4341 m->footprint -= released;
4342 init_top(m, m->top, m->topsize - released);
4343 check_top_chunk(m, m->top);
4344 }
4345 }
4346
4347 /* Unmap any unused mmapped segments */
4348 if (HAVE_MMAP)
4349 released += release_unused_segments(m);
4350
4351 /* On failure, disable autotrim to avoid repeated failed future calls */
4352 if (released == 0 && m->topsize > m->trim_check)
4353 m->trim_check = MAX_SIZE_T;
4354 }
4355
4356 return (released != 0)? 1 : 0;
4357}
4358
4359/* Consolidate and bin a chunk. Differs from exported versions
4360 of free mainly in that the chunk need not be marked as inuse.
4361*/
4362static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4363 mchunkptr next = chunk_plus_offset(p, psize);
4364 if (!pinuse(p)) {
4365 mchunkptr prev;
4366 size_t prevsize = p->prev_foot;
4367 if (is_mmapped(p)) {
4368 psize += prevsize + MMAP_FOOT_PAD;
4369 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4370 m->footprint -= psize;
4371 return;
4372 }
4373 prev = chunk_minus_offset(p, prevsize);
4374 psize += prevsize;
4375 p = prev;
4376 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4377 if (p != m->dv) {
4378 unlink_chunk(m, p, prevsize);
4379 }
4380 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4381 m->dvsize = psize;
4382 set_free_with_pinuse(p, psize, next);
4383 return;
4384 }
4385 }
4386 else {
4387 CORRUPTION_ERROR_ACTION(m);
4388 return;
4389 }
4390 }
4391 if (RTCHECK(ok_address(m, next))) {
4392 if (!cinuse(next)) { /* consolidate forward */
4393 if (next == m->top) {
4394 size_t tsize = m->topsize += psize;
4395 m->top = p;
4396 p->head = tsize | PINUSE_BIT;
4397 if (p == m->dv) {
4398 m->dv = 0;
4399 m->dvsize = 0;
4400 }
4401 return;
4402 }
4403 else if (next == m->dv) {
4404 size_t dsize = m->dvsize += psize;
4405 m->dv = p;
4406 set_size_and_pinuse_of_free_chunk(p, dsize);
4407 return;
4408 }
4409 else {
4410 size_t nsize = chunksize(next);
4411 psize += nsize;
4412 unlink_chunk(m, next, nsize);
4413 set_size_and_pinuse_of_free_chunk(p, psize);
4414 if (p == m->dv) {
4415 m->dvsize = psize;
4416 return;
4417 }
4418 }
4419 }
4420 else {
4421 set_free_with_pinuse(p, psize, next);
4422 }
4423 insert_chunk(m, p, psize);
4424 }
4425 else {
4426 CORRUPTION_ERROR_ACTION(m);
4427 }
4428}
4429
4430/* ---------------------------- malloc --------------------------- */
4431
4432/* allocate a large request from the best fitting chunk in a treebin */
4433static void* tmalloc_large(mstate m, size_t nb) {
4434 tchunkptr v = 0;
4435 size_t rsize = -nb; /* Unsigned negation */
4436 tchunkptr t;
4437 bindex_t idx;
4438 compute_tree_index(nb, idx);
4439 if ((t = *treebin_at(m, idx)) != 0) {
4440 /* Traverse tree for this bin looking for node with size == nb */
4441 size_t sizebits = nb << leftshift_for_tree_index(idx);
4442 tchunkptr rst = 0; /* The deepest untaken right subtree */
4443 for (;;) {
4444 tchunkptr rt;
4445 size_t trem = chunksize(t) - nb;
4446 if (trem < rsize) {
4447 v = t;
4448 if ((rsize = trem) == 0)
4449 break;
4450 }
4451 rt = t->child[1];
4452 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4453 if (rt != 0 && rt != t)
4454 rst = rt;
4455 if (t == 0) {
4456 t = rst; /* set t to least subtree holding sizes > nb */
4457 break;
4458 }
4459 sizebits <<= 1;
4460 }
4461 }
4462 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4463 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4464 if (leftbits != 0) {
4465 bindex_t i;
4466 binmap_t leastbit = least_bit(leftbits);
4467 compute_bit2idx(leastbit, i);
4468 t = *treebin_at(m, i);
4469 }
4470 }
4471
4472 while (t != 0) { /* find smallest of tree or subtree */
4473 size_t trem = chunksize(t) - nb;
4474 if (trem < rsize) {
4475 rsize = trem;
4476 v = t;
4477 }
4478 t = leftmost_child(t);
4479 }
4480
4481 /* If dv is a better fit, return 0 so malloc will use it */
4482 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4483 if (RTCHECK(ok_address(m, v))) { /* split */
4484 mchunkptr r = chunk_plus_offset(v, nb);
4485 assert(chunksize(v) == rsize + nb);
4486 if (RTCHECK(ok_next(v, r))) {
4487 unlink_large_chunk(m, v);
4488 if (rsize < MIN_CHUNK_SIZE)
4489 set_inuse_and_pinuse(m, v, (rsize + nb));
4490 else {
4491 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4492 set_size_and_pinuse_of_free_chunk(r, rsize);
4493 insert_chunk(m, r, rsize);
4494 }
4495 return chunk2mem(v);
4496 }
4497 }
4498 CORRUPTION_ERROR_ACTION(m);
4499 }
4500 return 0;
4501}
4502
4503/* allocate a small request from the best fitting chunk in a treebin */
4504static void* tmalloc_small(mstate m, size_t nb) {
4505 tchunkptr t, v;
4506 size_t rsize;
4507 bindex_t i;
4508 binmap_t leastbit = least_bit(m->treemap);
4509 compute_bit2idx(leastbit, i);
4510 v = t = *treebin_at(m, i);
4511 rsize = chunksize(t) - nb;
4512
4513 while ((t = leftmost_child(t)) != 0) {
4514 size_t trem = chunksize(t) - nb;
4515 if (trem < rsize) {
4516 rsize = trem;
4517 v = t;
4518 }
4519 }
4520
4521 if (RTCHECK(ok_address(m, v))) {
4522 mchunkptr r = chunk_plus_offset(v, nb);
4523 assert(chunksize(v) == rsize + nb);
4524 if (RTCHECK(ok_next(v, r))) {
4525 unlink_large_chunk(m, v);
4526 if (rsize < MIN_CHUNK_SIZE)
4527 set_inuse_and_pinuse(m, v, (rsize + nb));
4528 else {
4529 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4530 set_size_and_pinuse_of_free_chunk(r, rsize);
4531 replace_dv(m, r, rsize);
4532 }
4533 return chunk2mem(v);
4534 }
4535 }
4536
4537 CORRUPTION_ERROR_ACTION(m);
4538 return 0;
4539}
4540
4541#if !ONLY_MSPACES
4542
4543void* dlmalloc(size_t bytes) {
4544 /*
4545 Basic algorithm:
4546 If a small request (< 256 bytes minus per-chunk overhead):
4547 1. If one exists, use a remainderless chunk in associated smallbin.
4548 (Remainderless means that there are too few excess bytes to
4549 represent as a chunk.)
4550 2. If it is big enough, use the dv chunk, which is normally the
4551 chunk adjacent to the one used for the most recent small request.
4552 3. If one exists, split the smallest available chunk in a bin,
4553 saving remainder in dv.
4554 4. If it is big enough, use the top chunk.
4555 5. If available, get memory from system and use it
4556 Otherwise, for a large request:
4557 1. Find the smallest available binned chunk that fits, and use it
4558 if it is better fitting than dv chunk, splitting if necessary.
4559 2. If better fitting than any binned chunk, use the dv chunk.
4560 3. If it is big enough, use the top chunk.
4561 4. If request size >= mmap threshold, try to directly mmap this chunk.
4562 5. If available, get memory from system and use it
4563
4564 The ugly goto's here ensure that postaction occurs along all paths.
4565 */
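
  /*
     Worked example of the size classing below (a sketch, not compiled;
     assumes 4-byte size_t, default 8-byte alignment, and no FOOTERS,
     so CHUNK_OVERHEAD is 4):

       malloc(13) -> nb = pad_request(13) = (13 + 4 + 7) & ~7 = 24
       small_index(24) = 24 >> 3 = 3

     so the request is served from smallbin 3, or from the dv/top/tree
     fallbacks described above. With 8-byte size_t and 16-byte alignment,
     the same request pads to 32 bytes.
  */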
4566
4567#if USE_LOCKS
4568 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4569#endif
4570
4571 if (!PREACTION(gm)) {
4572 void* mem;
4573 size_t nb;
4574 if (bytes <= MAX_SMALL_REQUEST) {
4575 bindex_t idx;
4576 binmap_t smallbits;
4577 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4578 idx = small_index(nb);
4579 smallbits = gm->smallmap >> idx;
4580
4581 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4582 mchunkptr b, p;
4583 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4584 b = smallbin_at(gm, idx);
4585 p = b->fd;
4586 assert(chunksize(p) == small_index2size(idx));
4587 unlink_first_small_chunk(gm, b, p, idx);
4588 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4589 mem = chunk2mem(p);
4590 check_malloced_chunk(gm, mem, nb);
4591 goto postaction;
4592 }
4593
4594 else if (nb > gm->dvsize) {
4595 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4596 mchunkptr b, p, r;
4597 size_t rsize;
4598 bindex_t i;
4599 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4600 binmap_t leastbit = least_bit(leftbits);
4601 compute_bit2idx(leastbit, i);
4602 b = smallbin_at(gm, i);
4603 p = b->fd;
4604 assert(chunksize(p) == small_index2size(i));
4605 unlink_first_small_chunk(gm, b, p, i);
4606 rsize = small_index2size(i) - nb;
4607 /* Fit here cannot be remainderless if 4byte sizes */
4608 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4609 set_inuse_and_pinuse(gm, p, small_index2size(i));
4610 else {
4611 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4612 r = chunk_plus_offset(p, nb);
4613 set_size_and_pinuse_of_free_chunk(r, rsize);
4614 replace_dv(gm, r, rsize);
4615 }
4616 mem = chunk2mem(p);
4617 check_malloced_chunk(gm, mem, nb);
4618 goto postaction;
4619 }
4620
4621 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4622 check_malloced_chunk(gm, mem, nb);
4623 goto postaction;
4624 }
4625 }
4626 }
4627 else if (bytes >= MAX_REQUEST)
4628 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4629 else {
4630 nb = pad_request(bytes);
4631 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4632 check_malloced_chunk(gm, mem, nb);
4633 goto postaction;
4634 }
4635 }
4636
4637 if (nb <= gm->dvsize) {
4638 size_t rsize = gm->dvsize - nb;
4639 mchunkptr p = gm->dv;
4640 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4641 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4642 gm->dvsize = rsize;
4643 set_size_and_pinuse_of_free_chunk(r, rsize);
4644 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4645 }
4646 else { /* exhaust dv */
4647 size_t dvs = gm->dvsize;
4648 gm->dvsize = 0;
4649 gm->dv = 0;
4650 set_inuse_and_pinuse(gm, p, dvs);
4651 }
4652 mem = chunk2mem(p);
4653 check_malloced_chunk(gm, mem, nb);
4654 goto postaction;
4655 }
4656
4657 else if (nb < gm->topsize) { /* Split top */
4658 size_t rsize = gm->topsize -= nb;
4659 mchunkptr p = gm->top;
4660 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4661 r->head = rsize | PINUSE_BIT;
4662 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4663 mem = chunk2mem(p);
4664 check_top_chunk(gm, gm->top);
4665 check_malloced_chunk(gm, mem, nb);
4666 goto postaction;
4667 }
4668
4669 mem = sys_alloc(gm, nb);
4670
4671 postaction:
4672 POSTACTION(gm);
4673 return mem;
4674 }
4675
4676 return 0;
4677}
4678
4679/* ---------------------------- free --------------------------- */
4680
4681void dlfree(void* mem) {
4682 /*
4683 Consolidate freed chunks with preceding or succeeding bordering
4684 free chunks, if they exist, and then place in a bin. Intermixed
4685 with special cases for top, dv, mmapped chunks, and usage errors.
4686 */
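
  /*
     Sketch of the consolidation below: when freeing B in the layout
     [A][B][C],
       - if A is free (pinuse(B) is 0), B is first merged backward into A;
       - if C is free (cinuse(C) is 0), the result is merged forward into
         C, with special cases when C is the top chunk or the dv chunk;
       - the merged chunk is then placed in a smallbin or treebin unless
         it was absorbed into top or dv.
  */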
4687
4688 if (mem != 0) {
4689 mchunkptr p = mem2chunk(mem);
4690#if FOOTERS
4691 mstate fm = get_mstate_for(p);
4692 if (!ok_magic(fm)) {
4693 USAGE_ERROR_ACTION(fm, p);
4694 return;
4695 }
4696#else /* FOOTERS */
4697#define fm gm
4698#endif /* FOOTERS */
4699 if (!PREACTION(fm)) {
4700 check_inuse_chunk(fm, p);
4701 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4702 size_t psize = chunksize(p);
4703 mchunkptr next = chunk_plus_offset(p, psize);
4704 if (!pinuse(p)) {
4705 size_t prevsize = p->prev_foot;
4706 if (is_mmapped(p)) {
4707 psize += prevsize + MMAP_FOOT_PAD;
4708 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4709 fm->footprint -= psize;
4710 goto postaction;
4711 }
4712 else {
4713 mchunkptr prev = chunk_minus_offset(p, prevsize);
4714 psize += prevsize;
4715 p = prev;
4716 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4717 if (p != fm->dv) {
4718 unlink_chunk(fm, p, prevsize);
4719 }
4720 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4721 fm->dvsize = psize;
4722 set_free_with_pinuse(p, psize, next);
4723 goto postaction;
4724 }
4725 }
4726 else
4727 goto erroraction;
4728 }
4729 }
4730
4731 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4732 if (!cinuse(next)) { /* consolidate forward */
4733 if (next == fm->top) {
4734 size_t tsize = fm->topsize += psize;
4735 fm->top = p;
4736 p->head = tsize | PINUSE_BIT;
4737 if (p == fm->dv) {
4738 fm->dv = 0;
4739 fm->dvsize = 0;
4740 }
4741 if (should_trim(fm, tsize))
4742 sys_trim(fm, 0);
4743 goto postaction;
4744 }
4745 else if (next == fm->dv) {
4746 size_t dsize = fm->dvsize += psize;
4747 fm->dv = p;
4748 set_size_and_pinuse_of_free_chunk(p, dsize);
4749 goto postaction;
4750 }
4751 else {
4752 size_t nsize = chunksize(next);
4753 psize += nsize;
4754 unlink_chunk(fm, next, nsize);
4755 set_size_and_pinuse_of_free_chunk(p, psize);
4756 if (p == fm->dv) {
4757 fm->dvsize = psize;
4758 goto postaction;
4759 }
4760 }
4761 }
4762 else
4763 set_free_with_pinuse(p, psize, next);
4764
4765 if (is_small(psize)) {
4766 insert_small_chunk(fm, p, psize);
4767 check_free_chunk(fm, p);
4768 }
4769 else {
4770 tchunkptr tp = (tchunkptr)p;
4771 insert_large_chunk(fm, tp, psize);
4772 check_free_chunk(fm, p);
4773 if (--fm->release_checks == 0)
4774 release_unused_segments(fm);
4775 }
4776 goto postaction;
4777 }
4778 }
4779 erroraction:
4780 USAGE_ERROR_ACTION(fm, p);
4781 postaction:
4782 POSTACTION(fm);
4783 }
4784 }
4785#if !FOOTERS
4786#undef fm
4787#endif /* FOOTERS */
4788}
4789
4790void* dlcalloc(size_t n_elements, size_t elem_size) {
4791 void* mem;
4792 size_t req = 0;
4793 if (n_elements != 0) {
4794 req = n_elements * elem_size;
4795 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4796 (req / n_elements != elem_size))
4797 req = MAX_SIZE_T; /* force downstream failure on overflow */
4798 }
4799 mem = dlmalloc(req);
4800 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4801 memset(mem, 0, req);
4802 return mem;
4803}
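
/*
   Note on the overflow check in dlcalloc above: when n_elements and
   elem_size both fit in 16 bits, their product cannot wrap even with
   4-byte size_t, so the division is short-circuited away in the common
   small case. For example, with 4-byte size_t:

     n_elements = 0x10000, elem_size = 0x10001
       -> req wraps to 0x10000, but req / n_elements != elem_size,
          so req is forced to MAX_SIZE_T and the allocation fails cleanly.
*/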
4804
4805#endif /* !ONLY_MSPACES */
4806
4807/* ------------ Internal support for realloc, memalign, etc -------------- */
4808
4809/* Try to realloc; only in-place unless can_move true */
4810static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4811 int can_move) {
4812 mchunkptr newp = 0;
4813 size_t oldsize = chunksize(p);
4814 mchunkptr next = chunk_plus_offset(p, oldsize);
4815 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4816 ok_next(p, next) && ok_pinuse(next))) {
4817 if (is_mmapped(p)) {
4818 newp = mmap_resize(m, p, nb, can_move);
4819 }
4820 else if (oldsize >= nb) { /* already big enough */
4821 size_t rsize = oldsize - nb;
4822 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4823 mchunkptr r = chunk_plus_offset(p, nb);
4824 set_inuse(m, p, nb);
4825 set_inuse(m, r, rsize);
4826 dispose_chunk(m, r, rsize);
4827 }
4828 newp = p;
4829 }
4830 else if (next == m->top) { /* extend into top */
4831 if (oldsize + m->topsize > nb) {
4832 size_t newsize = oldsize + m->topsize;
4833 size_t newtopsize = newsize - nb;
4834 mchunkptr newtop = chunk_plus_offset(p, nb);
4835 set_inuse(m, p, nb);
4836 newtop->head = newtopsize |PINUSE_BIT;
4837 m->top = newtop;
4838 m->topsize = newtopsize;
4839 newp = p;
4840 }
4841 }
4842 else if (next == m->dv) { /* extend into dv */
4843 size_t dvs = m->dvsize;
4844 if (oldsize + dvs >= nb) {
4845 size_t dsize = oldsize + dvs - nb;
4846 if (dsize >= MIN_CHUNK_SIZE) {
4847 mchunkptr r = chunk_plus_offset(p, nb);
4848 mchunkptr n = chunk_plus_offset(r, dsize);
4849 set_inuse(m, p, nb);
4850 set_size_and_pinuse_of_free_chunk(r, dsize);
4851 clear_pinuse(n);
4852 m->dvsize = dsize;
4853 m->dv = r;
4854 }
4855 else { /* exhaust dv */
4856 size_t newsize = oldsize + dvs;
4857 set_inuse(m, p, newsize);
4858 m->dvsize = 0;
4859 m->dv = 0;
4860 }
4861 newp = p;
4862 }
4863 }
4864 else if (!cinuse(next)) { /* extend into next free chunk */
4865 size_t nextsize = chunksize(next);
4866 if (oldsize + nextsize >= nb) {
4867 size_t rsize = oldsize + nextsize - nb;
4868 unlink_chunk(m, next, nextsize);
4869 if (rsize < MIN_CHUNK_SIZE) {
4870 size_t newsize = oldsize + nextsize;
4871 set_inuse(m, p, newsize);
4872 }
4873 else {
4874 mchunkptr r = chunk_plus_offset(p, nb);
4875 set_inuse(m, p, nb);
4876 set_inuse(m, r, rsize);
4877 dispose_chunk(m, r, rsize);
4878 }
4879 newp = p;
4880 }
4881 }
4882 }
4883 else {
4884 USAGE_ERROR_ACTION(m, chunk2mem(p));
4885 }
4886 return newp;
4887}
4888
4889static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4890 void* mem = 0;
4891 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4892 alignment = MIN_CHUNK_SIZE;
4893 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4894 size_t a = MALLOC_ALIGNMENT << 1;
4895 while (a < alignment) a <<= 1;
4896 alignment = a;
4897 }
4898 if (bytes >= MAX_REQUEST - alignment) {
4899 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4900 MALLOC_FAILURE_ACTION;
4901 }
4902 }
4903 else {
4904 size_t nb = request2size(bytes);
4905 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4906 mem = internal_malloc(m, req);
4907 if (mem != 0) {
4908 mchunkptr p = mem2chunk(mem);
4909 if (PREACTION(m))
4910 return 0;
4911 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4912 /*
4913 Find an aligned spot inside chunk. Since we need to give
4914 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4915 the first calculation places us at a spot with less than
4916 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4917 We've allocated enough total room so that this is always
4918 possible.
4919 */
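        /*
           Numeric sketch (assumes 4-byte size_t, so mem == (char*)p + 8):
           with alignment = 64 and mem = 0x1008, p = 0x1000 and
             br = mem2chunk((0x1008 + 63) & -64) = mem2chunk(0x1040) = 0x1038
             leadsize = 0x1038 - 0x1000 = 0x38 >= MIN_CHUNK_SIZE
           so pos = br, and the 0x38-byte leader is given back below.
        */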
4920 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4921 SIZE_T_ONE)) &
4922 -alignment));
4923 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4924 br : br+alignment;
4925 mchunkptr newp = (mchunkptr)pos;
4926 size_t leadsize = pos - (char*)(p);
4927 size_t newsize = chunksize(p) - leadsize;
4928
4929 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4930 newp->prev_foot = p->prev_foot + leadsize;
4931 newp->head = newsize;
4932 }
4933 else { /* Otherwise, give back leader, use the rest */
4934 set_inuse(m, newp, newsize);
4935 set_inuse(m, p, leadsize);
4936 dispose_chunk(m, p, leadsize);
4937 }
4938 p = newp;
4939 }
4940
4941 /* Give back spare room at the end */
4942 if (!is_mmapped(p)) {
4943 size_t size = chunksize(p);
4944 if (size > nb + MIN_CHUNK_SIZE) {
4945 size_t remainder_size = size - nb;
4946 mchunkptr remainder = chunk_plus_offset(p, nb);
4947 set_inuse(m, p, nb);
4948 set_inuse(m, remainder, remainder_size);
4949 dispose_chunk(m, remainder, remainder_size);
4950 }
4951 }
4952
4953 mem = chunk2mem(p);
4954 assert (chunksize(p) >= nb);
4955 assert(((size_t)mem & (alignment - 1)) == 0);
4956 check_inuse_chunk(m, p);
4957 POSTACTION(m);
4958 }
4959 }
4960 return mem;
4961}
4962
4963/*
4964 Common support for independent_X routines, handling
4965 all of the combinations that can result.
4966 The opts arg has:
4967 bit 0 set if all elements are same size (using sizes[0])
4968 bit 1 set if elements should be zeroed
4969*/
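/*
   For example, the exported wrappers defined later call this as:
     dlindependent_calloc(n, size, chunks)    -> opts == 3
       (bit 0: all elements use sizes[0]; bit 1: zero the elements)
     dlindependent_comalloc(n, sizes, chunks) -> opts == 0
       (per-element sizes, no clearing)
*/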
4970static void** ialloc(mstate m,
4971 size_t n_elements,
4972 size_t* sizes,
4973 int opts,
4974 void* chunks[]) {
4975
4976 size_t element_size; /* chunksize of each element, if all same */
4977 size_t contents_size; /* total size of elements */
4978 size_t array_size; /* request size of pointer array */
4979 void* mem; /* malloced aggregate space */
4980 mchunkptr p; /* corresponding chunk */
4981 size_t remainder_size; /* remaining bytes while splitting */
4982 void** marray; /* either "chunks" or malloced ptr array */
4983 mchunkptr array_chunk; /* chunk for malloced ptr array */
4984 flag_t was_enabled; /* to disable mmap */
4985 size_t size;
4986 size_t i;
4987
4988 ensure_initialization();
4989 /* compute array length, if needed */
4990 if (chunks != 0) {
4991 if (n_elements == 0)
4992 return chunks; /* nothing to do */
4993 marray = chunks;
4994 array_size = 0;
4995 }
4996 else {
4997 /* if empty req, must still return chunk representing empty array */
4998 if (n_elements == 0)
4999 return (void**)internal_malloc(m, 0);
5000 marray = 0;
5001 array_size = request2size(n_elements * (sizeof(void*)));
5002 }
5003
5004 /* compute total element size */
5005 if (opts & 0x1) { /* all-same-size */
5006 element_size = request2size(*sizes);
5007 contents_size = n_elements * element_size;
5008 }
5009 else { /* add up all the sizes */
5010 element_size = 0;
5011 contents_size = 0;
5012 for (i = 0; i != n_elements; ++i)
5013 contents_size += request2size(sizes[i]);
5014 }
5015
5016 size = contents_size + array_size;
5017
5018 /*
5019 Allocate the aggregate chunk. First disable direct-mmapping so
5020 malloc won't use it, since we would not be able to later
5021 free/realloc space internal to a segregated mmap region.
5022 */
5023 was_enabled = use_mmap(m);
5024 disable_mmap(m);
5025 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5026 if (was_enabled)
5027 enable_mmap(m);
5028 if (mem == 0)
5029 return 0;
5030
5031 if (PREACTION(m)) return 0;
5032 p = mem2chunk(mem);
5033 remainder_size = chunksize(p);
5034
5035 assert(!is_mmapped(p));
5036
5037 if (opts & 0x2) { /* optionally clear the elements */
5038 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5039 }
5040
5041 /* If not provided, allocate the pointer array as final part of chunk */
5042 if (marray == 0) {
5043 size_t array_chunk_size;
5044 array_chunk = chunk_plus_offset(p, contents_size);
5045 array_chunk_size = remainder_size - contents_size;
5046 marray = (void**) (chunk2mem(array_chunk));
5047 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5048 remainder_size = contents_size;
5049 }
5050
5051 /* split out elements */
5052 for (i = 0; ; ++i) {
5053 marray[i] = chunk2mem(p);
5054 if (i != n_elements-1) {
5055 if (element_size != 0)
5056 size = element_size;
5057 else
5058 size = request2size(sizes[i]);
5059 remainder_size -= size;
5060 set_size_and_pinuse_of_inuse_chunk(m, p, size);
5061 p = chunk_plus_offset(p, size);
5062 }
5063 else { /* the final element absorbs any overallocation slop */
5064 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5065 break;
5066 }
5067 }
5068
5069#if DEBUG
5070 if (marray != chunks) {
5071 /* final element must have exactly exhausted chunk */
5072 if (element_size != 0) {
5073 assert(remainder_size == element_size);
5074 }
5075 else {
5076 assert(remainder_size == request2size(sizes[i]));
5077 }
5078 check_inuse_chunk(m, mem2chunk(marray));
5079 }
5080 for (i = 0; i != n_elements; ++i)
5081 check_inuse_chunk(m, mem2chunk(marray[i]));
5082
5083#endif /* DEBUG */
5084
5085 POSTACTION(m);
5086 return marray;
5087}
5088
5089/* Try to free all pointers in the given array.
5090 Note: this could be made faster, by delaying consolidation,
5091 at the price of disabling some user integrity checks. We
5092 still optimize some consolidations by combining adjacent
5093 chunks before freeing, which will occur often if allocated
5094 with ialloc or the array is sorted.
5095*/
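/*
   Usage sketch for the exported wrapper dlbulk_free, defined later
   (not compiled):

     void* ptrs[3];
     ptrs[0] = dlmalloc(16);
     ptrs[1] = dlmalloc(32);
     ptrs[2] = dlmalloc(64);
     size_t unfreed = dlbulk_free(ptrs, 3);  // frees all three
     assert(unfreed == 0);

   Freed slots are zeroed and null entries are skipped, so passing the
   same array again is harmless.
*/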
5096static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5097 size_t unfreed = 0;
5098 if (!PREACTION(m)) {
5099 void** a;
5100 void** fence = &(array[nelem]);
5101 for (a = array; a != fence; ++a) {
5102 void* mem = *a;
5103 if (mem != 0) {
5104 mchunkptr p = mem2chunk(mem);
5105 size_t psize = chunksize(p);
5106#if FOOTERS
5107 if (get_mstate_for(p) != m) {
5108 ++unfreed;
5109 continue;
5110 }
5111#endif
5112 check_inuse_chunk(m, p);
5113 *a = 0;
5114 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5115 void ** b = a + 1; /* try to merge with next chunk */
5116 mchunkptr next = next_chunk(p);
5117 if (b != fence && *b == chunk2mem(next)) {
5118 size_t newsize = chunksize(next) + psize;
5119 set_inuse(m, p, newsize);
5120 *b = chunk2mem(p);
5121 }
5122 else
5123 dispose_chunk(m, p, psize);
5124 }
5125 else {
5126 CORRUPTION_ERROR_ACTION(m);
5127 break;
5128 }
5129 }
5130 }
5131 if (should_trim(m, m->topsize))
5132 sys_trim(m, 0);
5133 POSTACTION(m);
5134 }
5135 return unfreed;
5136}
5137
5138/* Traversal */
5139#if MALLOC_INSPECT_ALL
5140static void internal_inspect_all(mstate m,
5141 void(*handler)(void *start,
5142 void *end,
5143 size_t used_bytes,
5144 void* callback_arg),
5145 void* arg) {
5146 if (is_initialized(m)) {
5147 mchunkptr top = m->top;
5148 msegmentptr s;
5149 for (s = &m->seg; s != 0; s = s->next) {
5150 mchunkptr q = align_as_chunk(s->base);
5151 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5152 mchunkptr next = next_chunk(q);
5153 size_t sz = chunksize(q);
5154 size_t used;
5155 void* start;
5156 if (is_inuse(q)) {
5157 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5158 start = chunk2mem(q);
5159 }
5160 else {
5161 used = 0;
5162 if (is_small(sz)) { /* offset by possible bookkeeping */
5163 start = (void*)((char*)q + sizeof(struct malloc_chunk));
5164 }
5165 else {
5166 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5167 }
5168 }
5169 if (start < (void*)next) /* skip if all space is bookkeeping */
5170 handler(start, next, used, arg);
5171 if (q == top)
5172 break;
5173 q = next;
5174 }
5175 }
5176 }
5177}
5178#endif /* MALLOC_INSPECT_ALL */
5179
5180/* ------------------ Exported realloc, memalign, etc -------------------- */
5181
5182#if !ONLY_MSPACES
5183
5184void* dlrealloc(void* oldmem, size_t bytes) {
5185 void* mem = 0;
5186 if (oldmem == 0) {
5187 mem = dlmalloc(bytes);
5188 }
5189 else if (bytes >= MAX_REQUEST) {
5190 MALLOC_FAILURE_ACTION;
5191 }
5192#ifdef REALLOC_ZERO_BYTES_FREES
5193 else if (bytes == 0) {
5194 dlfree(oldmem);
5195 }
5196#endif /* REALLOC_ZERO_BYTES_FREES */
5197 else {
5198 size_t nb = request2size(bytes);
5199 mchunkptr oldp = mem2chunk(oldmem);
5200#if ! FOOTERS
5201 mstate m = gm;
5202#else /* FOOTERS */
5203 mstate m = get_mstate_for(oldp);
5204 if (!ok_magic(m)) {
5205 USAGE_ERROR_ACTION(m, oldmem);
5206 return 0;
5207 }
5208#endif /* FOOTERS */
5209 if (!PREACTION(m)) {
5210 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5211 POSTACTION(m);
5212 if (newp != 0) {
5213 check_inuse_chunk(m, newp);
5214 mem = chunk2mem(newp);
5215 }
5216 else {
5217 mem = internal_malloc(m, bytes);
5218 if (mem != 0) {
5219 size_t oc = chunksize(oldp) - overhead_for(oldp);
5220 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5221 internal_free(m, oldmem);
5222 }
5223 }
5224 }
5225 }
5226 return mem;
5227}
5228
5229void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5230 void* mem = 0;
5231 if (oldmem != 0) {
5232 if (bytes >= MAX_REQUEST) {
5233 MALLOC_FAILURE_ACTION;
5234 }
5235 else {
5236 size_t nb = request2size(bytes);
5237 mchunkptr oldp = mem2chunk(oldmem);
5238#if ! FOOTERS
5239 mstate m = gm;
5240#else /* FOOTERS */
5241 mstate m = get_mstate_for(oldp);
5242 if (!ok_magic(m)) {
5243 USAGE_ERROR_ACTION(m, oldmem);
5244 return 0;
5245 }
5246#endif /* FOOTERS */
5247 if (!PREACTION(m)) {
5248 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5249 POSTACTION(m);
5250 if (newp == oldp) {
5251 check_inuse_chunk(m, newp);
5252 mem = oldmem;
5253 }
5254 }
5255 }
5256 }
5257 return mem;
5258}
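
/*
   Usage sketch (not compiled): grow a block only if it can stay put,
   falling back to an explicit move otherwise.

     void* p = dlmalloc(100);
     if (dlrealloc_in_place(p, 200) == 0) { // could not extend in place
       void* q = dlrealloc(p, 200);         // may move and copy
       if (q != 0) p = q;
     }
*/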
5259
5260void* dlmemalign(size_t alignment, size_t bytes) {
5261 if (alignment <= MALLOC_ALIGNMENT) {
5262 return dlmalloc(bytes);
5263 }
5264 return internal_memalign(gm, alignment, bytes);
5265}
5266
5267int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5268 void* mem = 0;
5269 if (alignment == MALLOC_ALIGNMENT)
5270 mem = dlmalloc(bytes);
5271 else {
5272 size_t d = alignment / sizeof(void*);
5273 size_t r = alignment % sizeof(void*);
5274 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5275 return EINVAL;
5276 else if (bytes <= MAX_REQUEST - alignment) {
5277 if (alignment < MIN_CHUNK_SIZE)
5278 alignment = MIN_CHUNK_SIZE;
5279 mem = internal_memalign(gm, alignment, bytes);
5280 }
5281 }
5282 if (mem == 0)
5283 return ENOMEM;
5284 else {
5285 *pp = mem;
5286 return 0;
5287 }
5288}
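
/*
   For example, with 4-byte pointers, alignment = 24 gives d = 6 and
   r = 0; since 6 is not a power of two, EINVAL is returned, as POSIX
   requires alignment to be a power-of-two multiple of sizeof(void*).
*/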
5289
5290void* dlvalloc(size_t bytes) {
5291 size_t pagesz;
5292 ensure_initialization();
5293 pagesz = mparams.page_size;
5294 return dlmemalign(pagesz, bytes);
5295}
5296
5297void* dlpvalloc(size_t bytes) {
5298 size_t pagesz;
5299 ensure_initialization();
5300 pagesz = mparams.page_size;
5301 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5302}
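
/*
   e.g. with a 4096-byte page size, dlpvalloc(1) requests
   (1 + 4095) & ~4095 = 4096 bytes, returned page-aligned.
*/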
5303
5304void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5305 void* chunks[]) {
5306 size_t sz = elem_size; /* serves as 1-element array */
5307 return ialloc(gm, n_elements, &sz, 3, chunks);
5308}
5309
5310void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5311 void* chunks[]) {
5312 return ialloc(gm, n_elements, sizes, 0, chunks);
5313}
5314
5315size_t dlbulk_free(void* array[], size_t nelem) {
5316 return internal_bulk_free(gm, array, nelem);
5317}
5318
5319#if MALLOC_INSPECT_ALL
5320void dlmalloc_inspect_all(void(*handler)(void *start,
5321 void *end,
5322 size_t used_bytes,
5323 void* callback_arg),
5324 void* arg) {
5325 ensure_initialization();
5326 if (!PREACTION(gm)) {
5327 internal_inspect_all(gm, handler, arg);
5328 POSTACTION(gm);
5329 }
5330}
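
/*
   Example handler (a sketch, not compiled): sum the bytes in use.

     static void count_used(void* start, void* end, size_t used_bytes,
                            void* callback_arg) {
       (void)start; (void)end;
       *(size_t*)callback_arg += used_bytes;
     }
     ...
     size_t total = 0;
     dlmalloc_inspect_all(count_used, &total);

   The handler runs with the malloc lock held (see PREACTION above), so
   it must not itself call back into malloc or free.
*/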
5331#endif /* MALLOC_INSPECT_ALL */
5332
5333int dlmalloc_trim(size_t pad) {
5334 int result = 0;
5335 ensure_initialization();
5336 if (!PREACTION(gm)) {
5337 result = sys_trim(gm, pad);
5338 POSTACTION(gm);
5339 }
5340 return result;
5341}
5342
5343size_t dlmalloc_footprint(void) {
5344 return gm->footprint;
5345}
5346
5347size_t dlmalloc_max_footprint(void) {
5348 return gm->max_footprint;
5349}
5350
5351size_t dlmalloc_footprint_limit(void) {
5352 size_t maf = gm->footprint_limit;
5353 return maf == 0 ? MAX_SIZE_T : maf;
5354}
5355
5356size_t dlmalloc_set_footprint_limit(size_t bytes) {
5357 size_t result; /* invert sense of 0 */
5358 if (bytes == 0)
5359 result = granularity_align(1); /* Use minimal size */
5360 else if (bytes == MAX_SIZE_T)
5361 result = 0; /* disable */
5362 else
5363 result = granularity_align(bytes);
5364 return gm->footprint_limit = result;
5365}
5366
5367#if !NO_MALLINFO
5368struct mallinfo dlmallinfo(void) {
5369 return internal_mallinfo(gm);
5370}
5371#endif /* NO_MALLINFO */
5372
5373#if !NO_MALLOC_STATS
5374void dlmalloc_stats() {
5375 internal_malloc_stats(gm);
5376}
5377#endif /* NO_MALLOC_STATS */
5378
5379int dlmallopt(int param_number, int value) {
5380 return change_mparam(param_number, value);
5381}
5382
5383/* BEGIN android-changed: added const */
5384size_t dlmalloc_usable_size(const void* mem) {
5385/* END android-change */
5386 if (mem != 0) {
5387 mchunkptr p = mem2chunk(mem);
5388 if (is_inuse(p))
5389 return chunksize(p) - overhead_for(p);
5390 }
5391 return 0;
5392}
5393
5394#endif /* !ONLY_MSPACES */
5395
5396/* ----------------------------- user mspaces ---------------------------- */
5397
5398#if MSPACES
5399
5400static mstate init_user_mstate(char* tbase, size_t tsize) {
5401 size_t msize = pad_request(sizeof(struct malloc_state));
5402 mchunkptr mn;
5403 mchunkptr msp = align_as_chunk(tbase);
5404 mstate m = (mstate)(chunk2mem(msp));
5405 memset(m, 0, msize);
5406 (void)INITIAL_LOCK(&m->mutex);
5407 msp->head = (msize|INUSE_BITS);
5408 m->seg.base = m->least_addr = tbase;
5409 m->seg.size = m->footprint = m->max_footprint = tsize;
5410 m->magic = mparams.magic;
5411 m->release_checks = MAX_RELEASE_CHECK_RATE;
5412 m->mflags = mparams.default_mflags;
5413 m->extp = 0;
5414 m->exts = 0;
5415 disable_contiguous(m);
5416 init_bins(m);
5417 mn = next_chunk(mem2chunk(m));
5418 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5419 check_top_chunk(m, m->top);
5420 return m;
5421}
5422
5423mspace create_mspace(size_t capacity, int locked) {
5424 mstate m = 0;
5425 size_t msize;
5426 ensure_initialization();
5427 msize = pad_request(sizeof(struct malloc_state));
5428 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5429 size_t rs = ((capacity == 0)? mparams.granularity :
5430 (capacity + TOP_FOOT_SIZE + msize));
5431 size_t tsize = granularity_align(rs);
5432 char* tbase = (char*)(CALL_MMAP(tsize));
5433 if (tbase != CMFAIL) {
5434 m = init_user_mstate(tbase, tsize);
5435 m->seg.sflags = USE_MMAP_BIT;
5436 set_lock(m, locked);
5437 }
5438 }
5439 return (mspace)m;
5440}
5441
5442mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5443 mstate m = 0;
5444 size_t msize;
5445 ensure_initialization();
5446 msize = pad_request(sizeof(struct malloc_state));
5447 if (capacity > msize + TOP_FOOT_SIZE &&
5448 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5449 m = init_user_mstate((char*)base, capacity);
5450 m->seg.sflags = EXTERN_BIT;
5451 set_lock(m, locked);
5452 }
5453 return (mspace)m;
5454}
5455
5456int mspace_track_large_chunks(mspace msp, int enable) {
5457 int ret = 0;
5458 mstate ms = (mstate)msp;
5459 if (!PREACTION(ms)) {
5460 if (!use_mmap(ms)) {
5461 ret = 1;
5462 }
5463 if (!enable) {
5464 enable_mmap(ms);
5465 } else {
5466 disable_mmap(ms);
5467 }
5468 POSTACTION(ms);
5469 }
5470 return ret;
5471}
5472
5473size_t destroy_mspace(mspace msp) {
5474 size_t freed = 0;
5475 mstate ms = (mstate)msp;
5476 if (ok_magic(ms)) {
5477 msegmentptr sp = &ms->seg;
5478 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5479 while (sp != 0) {
5480 char* base = sp->base;
5481 size_t size = sp->size;
5482 flag_t flag = sp->sflags;
5483 (void)base; /* placate people compiling -Wunused-variable */
5484 sp = sp->next;
5485 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5486 CALL_MUNMAP(base, size) == 0)
5487 freed += size;
5488 }
5489 }
5490 else {
5491 USAGE_ERROR_ACTION(ms,ms);
5492 }
5493 return freed;
5494}
5495
5496/*
5497 mspace versions of routines are near-clones of the global
5498 versions. This is not so nice but better than the alternatives.
5499*/
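
/*
   Typical mspace usage (a sketch, not compiled):

     mspace msp = create_mspace(0, 0);    // default capacity, no locking
     void* p = mspace_malloc(msp, 128);
     mspace_free(msp, p);
     size_t freed = destroy_mspace(msp);  // releases all segments at once
*/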
5500
5501void* mspace_malloc(mspace msp, size_t bytes) {
5502 mstate ms = (mstate)msp;
5503 if (!ok_magic(ms)) {
5504 USAGE_ERROR_ACTION(ms,ms);
5505 return 0;
5506 }
5507 if (!PREACTION(ms)) {
5508 void* mem;
5509 size_t nb;
5510 if (bytes <= MAX_SMALL_REQUEST) {
5511 bindex_t idx;
5512 binmap_t smallbits;
5513 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5514 idx = small_index(nb);
5515 smallbits = ms->smallmap >> idx;
5516
5517 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5518 mchunkptr b, p;
5519 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5520 b = smallbin_at(ms, idx);
5521 p = b->fd;
5522 assert(chunksize(p) == small_index2size(idx));
5523 unlink_first_small_chunk(ms, b, p, idx);
5524 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5525 mem = chunk2mem(p);
5526 check_malloced_chunk(ms, mem, nb);
5527 goto postaction;
5528 }
5529
5530 else if (nb > ms->dvsize) {
5531 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5532 mchunkptr b, p, r;
5533 size_t rsize;
5534 bindex_t i;
5535 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5536 binmap_t leastbit = least_bit(leftbits);
5537 compute_bit2idx(leastbit, i);
5538 b = smallbin_at(ms, i);
5539 p = b->fd;
5540 assert(chunksize(p) == small_index2size(i));
5541 unlink_first_small_chunk(ms, b, p, i);
5542 rsize = small_index2size(i) - nb;
5543 /* Fit here cannot be remainderless if 4byte sizes */
5544 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5545 set_inuse_and_pinuse(ms, p, small_index2size(i));
5546 else {
5547 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5548 r = chunk_plus_offset(p, nb);
5549 set_size_and_pinuse_of_free_chunk(r, rsize);
5550 replace_dv(ms, r, rsize);
5551 }
5552 mem = chunk2mem(p);
5553 check_malloced_chunk(ms, mem, nb);
5554 goto postaction;
5555 }
5556
5557 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5558 check_malloced_chunk(ms, mem, nb);
5559 goto postaction;
5560 }
5561 }
5562 }
5563 else if (bytes >= MAX_REQUEST)
5564 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5565 else {
5566 nb = pad_request(bytes);
5567 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5568 check_malloced_chunk(ms, mem, nb);
5569 goto postaction;
5570 }
5571 }
5572
5573 if (nb <= ms->dvsize) {
5574 size_t rsize = ms->dvsize - nb;
5575 mchunkptr p = ms->dv;
5576 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5577 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5578 ms->dvsize = rsize;
5579 set_size_and_pinuse_of_free_chunk(r, rsize);
5580 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5581 }
5582 else { /* exhaust dv */
5583 size_t dvs = ms->dvsize;
5584 ms->dvsize = 0;
5585 ms->dv = 0;
5586 set_inuse_and_pinuse(ms, p, dvs);
5587 }
5588 mem = chunk2mem(p);
5589 check_malloced_chunk(ms, mem, nb);
5590 goto postaction;
5591 }
5592
5593 else if (nb < ms->topsize) { /* Split top */
5594 size_t rsize = ms->topsize -= nb;
5595 mchunkptr p = ms->top;
5596 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5597 r->head = rsize | PINUSE_BIT;
5598 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5599 mem = chunk2mem(p);
5600 check_top_chunk(ms, ms->top);
5601 check_malloced_chunk(ms, mem, nb);
5602 goto postaction;
5603 }
5604
5605 mem = sys_alloc(ms, nb);
5606
5607 postaction:
5608 POSTACTION(ms);
5609 return mem;
5610 }
5611
5612 return 0;
5613}
5614
5615void mspace_free(mspace msp, void* mem) {
5616 if (mem != 0) {
5617 mchunkptr p = mem2chunk(mem);
5618#if FOOTERS
5619 mstate fm = get_mstate_for(p);
5620 (void)msp; /* placate people compiling -Wunused */
5621#else /* FOOTERS */
5622 mstate fm = (mstate)msp;
5623#endif /* FOOTERS */
5624 if (!ok_magic(fm)) {
5625 USAGE_ERROR_ACTION(fm, p);
5626 return;
5627 }
5628 if (!PREACTION(fm)) {
5629 check_inuse_chunk(fm, p);
5630 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5631 size_t psize = chunksize(p);
5632 mchunkptr next = chunk_plus_offset(p, psize);
5633 if (!pinuse(p)) {
5634 size_t prevsize = p->prev_foot;
5635 if (is_mmapped(p)) {
5636 psize += prevsize + MMAP_FOOT_PAD;
5637 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5638 fm->footprint -= psize;
5639 goto postaction;
5640 }
5641 else {
5642 mchunkptr prev = chunk_minus_offset(p, prevsize);
5643 psize += prevsize;
5644 p = prev;
5645 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5646 if (p != fm->dv) {
5647 unlink_chunk(fm, p, prevsize);
5648 }
5649 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5650 fm->dvsize = psize;
5651 set_free_with_pinuse(p, psize, next);
5652 goto postaction;
5653 }
5654 }
5655 else
5656 goto erroraction;
5657 }
5658 }
5659
5660 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5661 if (!cinuse(next)) { /* consolidate forward */
5662 if (next == fm->top) {
5663 size_t tsize = fm->topsize += psize;
5664 fm->top = p;
5665 p->head = tsize | PINUSE_BIT;
5666 if (p == fm->dv) {
5667 fm->dv = 0;
5668 fm->dvsize = 0;
5669 }
5670 if (should_trim(fm, tsize))
5671 sys_trim(fm, 0);
5672 goto postaction;
5673 }
5674 else if (next == fm->dv) {
5675 size_t dsize = fm->dvsize += psize;
5676 fm->dv = p;
5677 set_size_and_pinuse_of_free_chunk(p, dsize);
5678 goto postaction;
5679 }
5680 else {
5681 size_t nsize = chunksize(next);
5682 psize += nsize;
5683 unlink_chunk(fm, next, nsize);
5684 set_size_and_pinuse_of_free_chunk(p, psize);
5685 if (p == fm->dv) {
5686 fm->dvsize = psize;
5687 goto postaction;
5688 }
5689 }
5690 }
5691 else
5692 set_free_with_pinuse(p, psize, next);
5693
5694 if (is_small(psize)) {
5695 insert_small_chunk(fm, p, psize);
5696 check_free_chunk(fm, p);
5697 }
5698 else {
5699 tchunkptr tp = (tchunkptr)p;
5700 insert_large_chunk(fm, tp, psize);
5701 check_free_chunk(fm, p);
5702 if (--fm->release_checks == 0)
5703 release_unused_segments(fm);
5704 }
5705 goto postaction;
5706 }
5707 }
5708 erroraction:
5709 USAGE_ERROR_ACTION(fm, p);
5710 postaction:
5711 POSTACTION(fm);
5712 }
5713 }
5714}
5715
5716void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5717 void* mem;
5718 size_t req = 0;
5719 mstate ms = (mstate)msp;
5720 if (!ok_magic(ms)) {
5721 USAGE_ERROR_ACTION(ms,ms);
5722 return 0;
5723 }
5724 if (n_elements != 0) {
5725 req = n_elements * elem_size;
5726 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5727 (req / n_elements != elem_size))
5728 req = MAX_SIZE_T; /* force downstream failure on overflow */
5729 }
5730 mem = internal_malloc(ms, req);
5731 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5732 memset(mem, 0, req);
5733 return mem;
5734}
5735
5736void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5737 void* mem = 0;
5738 if (oldmem == 0) {
5739 mem = mspace_malloc(msp, bytes);
5740 }
5741 else if (bytes >= MAX_REQUEST) {
5742 MALLOC_FAILURE_ACTION;
5743 }
5744#ifdef REALLOC_ZERO_BYTES_FREES
5745 else if (bytes == 0) {
5746 mspace_free(msp, oldmem);
5747 }
5748#endif /* REALLOC_ZERO_BYTES_FREES */
5749 else {
5750 size_t nb = request2size(bytes);
5751 mchunkptr oldp = mem2chunk(oldmem);
5752#if ! FOOTERS
5753 mstate m = (mstate)msp;
5754#else /* FOOTERS */
5755 mstate m = get_mstate_for(oldp);
5756 if (!ok_magic(m)) {
5757 USAGE_ERROR_ACTION(m, oldmem);
5758 return 0;
5759 }
5760#endif /* FOOTERS */
5761 if (!PREACTION(m)) {
5762 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5763 POSTACTION(m);
5764 if (newp != 0) {
5765 check_inuse_chunk(m, newp);
5766 mem = chunk2mem(newp);
5767 }
5768 else {
5769 mem = mspace_malloc(m, bytes);
5770 if (mem != 0) {
5771 size_t oc = chunksize(oldp) - overhead_for(oldp);
5772 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5773 mspace_free(m, oldmem);
5774 }
5775 }
5776 }
5777 }
5778 return mem;
5779}
5780
5781void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5782 void* mem = 0;
5783 if (oldmem != 0) {
5784 if (bytes >= MAX_REQUEST) {
5785 MALLOC_FAILURE_ACTION;
5786 }
5787 else {
5788 size_t nb = request2size(bytes);
5789 mchunkptr oldp = mem2chunk(oldmem);
5790#if ! FOOTERS
5791 mstate m = (mstate)msp;
5792#else /* FOOTERS */
5793 mstate m = get_mstate_for(oldp);
5794 (void)msp; /* placate people compiling -Wunused */
5795 if (!ok_magic(m)) {
5796 USAGE_ERROR_ACTION(m, oldmem);
5797 return 0;
5798 }
5799#endif /* FOOTERS */
5800 if (!PREACTION(m)) {
5801 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5802 POSTACTION(m);
5803 if (newp == oldp) {
5804 check_inuse_chunk(m, newp);
5805 mem = oldmem;
5806 }
5807 }
5808 }
5809 }
5810 return mem;
5811}
5812
5813void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5814 mstate ms = (mstate)msp;
5815 if (!ok_magic(ms)) {
5816 USAGE_ERROR_ACTION(ms,ms);
5817 return 0;
5818 }
5819 if (alignment <= MALLOC_ALIGNMENT)
5820 return mspace_malloc(msp, bytes);
5821 return internal_memalign(ms, alignment, bytes);
5822}
5823
5824void** mspace_independent_calloc(mspace msp, size_t n_elements,
5825 size_t elem_size, void* chunks[]) {
5826 size_t sz = elem_size; /* serves as 1-element array */
5827 mstate ms = (mstate)msp;
5828 if (!ok_magic(ms)) {
5829 USAGE_ERROR_ACTION(ms,ms);
5830 return 0;
5831 }
5832 return ialloc(ms, n_elements, &sz, 3, chunks);
5833}
5834
5835void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5836 size_t sizes[], void* chunks[]) {
5837 mstate ms = (mstate)msp;
5838 if (!ok_magic(ms)) {
5839 USAGE_ERROR_ACTION(ms,ms);
5840 return 0;
5841 }
5842 return ialloc(ms, n_elements, sizes, 0, chunks);
5843}
5844
5845size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5846 return internal_bulk_free((mstate)msp, array, nelem);
5847}
5848
5849#if MALLOC_INSPECT_ALL
5850void mspace_inspect_all(mspace msp,
5851 void(*handler)(void *start,
5852 void *end,
5853 size_t used_bytes,
5854 void* callback_arg),
5855 void* arg) {
5856 mstate ms = (mstate)msp;
5857 if (ok_magic(ms)) {
5858 if (!PREACTION(ms)) {
5859 internal_inspect_all(ms, handler, arg);
5860 POSTACTION(ms);
5861 }
5862 }
5863 else {
5864 USAGE_ERROR_ACTION(ms,ms);
5865 }
5866}
5867#endif /* MALLOC_INSPECT_ALL */
5868
5869int mspace_trim(mspace msp, size_t pad) {
5870 int result = 0;
5871 mstate ms = (mstate)msp;
5872 if (ok_magic(ms)) {
5873 if (!PREACTION(ms)) {
5874 result = sys_trim(ms, pad);
5875 POSTACTION(ms);
5876 }
5877 }
5878 else {
5879 USAGE_ERROR_ACTION(ms,ms);
5880 }
5881 return result;
5882}
5883
5884#if !NO_MALLOC_STATS
5885void mspace_malloc_stats(mspace msp) {
5886 mstate ms = (mstate)msp;
5887 if (ok_magic(ms)) {
5888 internal_malloc_stats(ms);
5889 }
5890 else {
5891 USAGE_ERROR_ACTION(ms,ms);
5892 }
5893}
5894#endif /* NO_MALLOC_STATS */
5895
5896size_t mspace_footprint(mspace msp) {
5897 size_t result = 0;
5898 mstate ms = (mstate)msp;
5899 if (ok_magic(ms)) {
5900 result = ms->footprint;
5901 }
5902 else {
5903 USAGE_ERROR_ACTION(ms,ms);
5904 }
5905 return result;
5906}
5907
5908size_t mspace_max_footprint(mspace msp) {
5909 size_t result = 0;
5910 mstate ms = (mstate)msp;
5911 if (ok_magic(ms)) {
5912 result = ms->max_footprint;
5913 }
5914 else {
5915 USAGE_ERROR_ACTION(ms,ms);
5916 }
5917 return result;
5918}
5919
5920size_t mspace_footprint_limit(mspace msp) {
5921 size_t result = 0;
5922 mstate ms = (mstate)msp;
5923 if (ok_magic(ms)) {
5924 size_t maf = ms->footprint_limit;
5925 result = (maf == 0) ? MAX_SIZE_T : maf;
5926 }
5927 else {
5928 USAGE_ERROR_ACTION(ms,ms);
5929 }
5930 return result;
5931}
5932
5933size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5934 size_t result = 0;
5935 mstate ms = (mstate)msp;
5936 if (ok_magic(ms)) {
5937 if (bytes == 0)
5938 result = granularity_align(1); /* Use minimal size */
5939 else if (bytes == MAX_SIZE_T)
5940 result = 0; /* disable */
5941 else
5942 result = granularity_align(bytes);
5943 ms->footprint_limit = result;
5944 }
5945 else {
5946 USAGE_ERROR_ACTION(ms,ms);
5947 }
5948 return result;
5949}
5950
5951#if !NO_MALLINFO
5952struct mallinfo mspace_mallinfo(mspace msp) {
5953 mstate ms = (mstate)msp;
5954 if (!ok_magic(ms)) {
5955 USAGE_ERROR_ACTION(ms,ms);
5956 }
5957 return internal_mallinfo(ms);
5958}
5959#endif /* NO_MALLINFO */
5960
5961size_t mspace_usable_size(const void* mem) {
5962 if (mem != 0) {
5963 mchunkptr p = mem2chunk(mem);
5964 if (is_inuse(p))
5965 return chunksize(p) - overhead_for(p);
5966 }
5967 return 0;
5968}
5969
5970int mspace_mallopt(int param_number, int value) {
5971 return change_mparam(param_number, value);
5972}
5973
5974#endif /* MSPACES */
5975
5976
5977/* -------------------- Alternative MORECORE functions ------------------- */
5978
5979/*
5980 Guidelines for creating a custom version of MORECORE:
5981
5982 * For best performance, MORECORE should allocate in multiples of pagesize.
5983 * MORECORE may allocate more memory than requested. (Or even less,
5984 but this will usually result in a malloc failure.)
5985 * MORECORE must not allocate memory when given argument zero, but
5986 instead return one past the end address of memory from previous
5987 nonzero call.
5988 * For best performance, consecutive calls to MORECORE with positive
5989 arguments should return increasing addresses, indicating that
5990 space has been contiguously extended.
5991 * Even though consecutive calls to MORECORE need not return contiguous
5992 addresses, it must be OK for malloc'ed chunks to span multiple
5993 regions in those cases where they do happen to be contiguous.
5994 * MORECORE need not handle negative arguments -- it may instead
5995 just return MFAIL when given negative arguments.
5996 Negative arguments are always multiples of pagesize. MORECORE
5997 must not misinterpret negative args as large positive unsigned
5998 args. You can suppress all such calls from even occurring by defining
5999 MORECORE_CANNOT_TRIM.
6000
6001 As an example alternative MORECORE, here is a custom allocator
6002 kindly contributed for pre-OSX macOS. It uses virtually but not
6003 necessarily physically contiguous non-paged memory (locked in,
6004 present and won't get swapped out). You can use it by uncommenting
6005 this section, adding some #includes, and setting up the appropriate
6006 defines above:
6007
6008 #define MORECORE osMoreCore
6009
6010 There is also a shutdown routine that should somehow be called for
6011 cleanup upon program exit.
6012
6013 #define MAX_POOL_ENTRIES 100
6014 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6015 static int next_os_pool;
6016 void *our_os_pools[MAX_POOL_ENTRIES];
6017
6018 void *osMoreCore(int size)
6019 {
6020 void *ptr = 0;
6021 static void *sbrk_top = 0;
6022
6023 if (size > 0)
6024 {
6025 if (size < MINIMUM_MORECORE_SIZE)
6026 size = MINIMUM_MORECORE_SIZE;
6027 if (CurrentExecutionLevel() == kTaskLevel)
6028 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6029 if (ptr == 0)
6030 {
6031 return (void *) MFAIL;
6032 }
6033 // save ptrs so they can be freed during cleanup
6034 our_os_pools[next_os_pool] = ptr;
6035 next_os_pool++;
6036 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6037 sbrk_top = (char *) ptr + size;
6038 return ptr;
6039 }
6040 else if (size < 0)
6041 {
6042 // we don't currently support shrink behavior
6043 return (void *) MFAIL;
6044 }
6045 else
6046 {
6047 return sbrk_top;
6048 }
6049 }
6050
6051 // cleanup any allocated memory pools
6052 // called as last thing before shutting down driver
6053
6054 void osCleanupMem(void)
6055 {
6056 void **ptr;
6057
6058 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6059 if (*ptr)
6060 {
6061 PoolDeallocate(*ptr);
6062 *ptr = 0;
6063 }
6064 }
6065
6066*/
6067
6068
6069/* -----------------------------------------------------------------------
6070History:
6071 V2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6072 * fix bad comparison in dlposix_memalign
6073 * don't reuse adjusted asize in sys_alloc
6074 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6075 * reduce compiler warnings -- thanks to all who reported/suggested these
6076
6077 V2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6078 * Always perform unlink checks unless INSECURE
6079 * Add posix_memalign.
6080 * Improve realloc to expand in more cases; expose realloc_in_place.
6081 Thanks to Peter Buhr for the suggestion.
6082 * Add footprint_limit, inspect_all, bulk_free. Thanks
6083 to Barry Hayes and others for the suggestions.
6084 * Internal refactorings to avoid calls while holding locks
6085 * Use non-reentrant locks by default. Thanks to Roland McGrath
6086 for the suggestion.
6087 * Small fixes to mspace_destroy, reset_on_error.
6088 * Various configuration extensions/changes. Thanks
6089 to all who contributed these.
6090
6091 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6092 * Update Creative Commons URL
6093
6094 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6095 * Use zeros instead of prev foot for is_mmapped
6096 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6097 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6098 * Fix insufficient sys_alloc padding when using 16byte alignment
6099 * Fix bad error check in mspace_footprint
6100 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6101 * Reentrant spin locks; thanks to Earl Chew and others
6102 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6103 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6104 * Extension hook in malloc_state
6105 * Various small adjustments to reduce warnings on some compilers
6106 * Various configuration extensions/changes for more platforms. Thanks
6107 to all who contributed these.
6108
6109 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6110 * Add max_footprint functions
6111 * Ensure all appropriate literals are size_t
6112 * Fix conditional compilation problem for some #define settings
6113 * Avoid concatenating segments with the one provided
6114 in create_mspace_with_base
6115 * Rename some variables to avoid compiler shadowing warnings
6116 * Use explicit lock initialization.
6117 * Better handling of sbrk interference.
6118 * Simplify and fix segment insertion, trimming and mspace_destroy
6119 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6120 * Thanks especially to Dennis Flanagan for help on these.
6121
6122 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6123 * Fix memalign brace error.
6124
6125 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6126 * Fix improper #endif nesting in C++
6127 * Add explicit casts needed for C++
6128
6129 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6130 * Use trees for large bins
6131 * Support mspaces
6132 * Use segments to unify sbrk-based and mmap-based system allocation,
6133 removing need for emulation on most platforms without sbrk.
6134 * Default safety checks
6135 * Optional footer checks. Thanks to William Robertson for the idea.
6136 * Internal code refactoring
6137 * Incorporate suggestions and platform-specific changes.
6138 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6139 Aaron Bachmann, Emery Berger, and others.
6140 * Speed up non-fastbin processing enough to remove fastbins.
6141 * Remove useless cfree() to avoid conflicts with other apps.
6142 * Remove internal memcpy, memset. Compilers handle builtins better.
6143 * Remove some options that no one ever used and rename others.
6144
6145 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6146 * Fix malloc_state bitmap array misdeclaration
6147
6148 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6149 * Allow tuning of FIRST_SORTED_BIN_SIZE
6150 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6151 * Better detection and support for non-contiguousness of MORECORE.
6152 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6153 * Bypass most of malloc if no frees. Thanks To Emery Berger.
6154 * Fix freeing of old top non-contiguous chunk in sysmalloc.
6155 * Raised default trim and map thresholds to 256K.
6156 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6157 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6158 * Branch-free bin calculation
6159 * Default trim and mmap thresholds now 256K.
6160
6161 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6162 * Introduce independent_comalloc and independent_calloc.
6163 Thanks to Michael Pachos for motivation and help.
6164 * Make optional .h file available
6165 * Allow > 2GB requests on 32bit systems.
6166 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6167 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6168 and Anonymous.
6169 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6170 helping test this.)
6171 * memalign: check alignment arg
6172 * realloc: don't try to shift chunks backwards, since this
6173 leads to more fragmentation in some programs and doesn't
6174 seem to help in any others.
6175 * Collect all cases in malloc requiring system memory into sysmalloc
6176 * Use mmap as backup to sbrk
6177 * Place all internal state in malloc_state
6178 * Introduce fastbins (although similar to 2.5.1)
6179 * Many minor tunings and cosmetic improvements
6180 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6181 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6182 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6183 * Include errno.h to support default failure action.
6184
6185 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6186 * return null for negative arguments
6187 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6188 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6189 (e.g. WIN32 platforms)
6190 * Cleanup header file inclusion for WIN32 platforms
6191 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6192 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6193 memory allocation routines
6194 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6195 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6196 usage of 'assert' in non-WIN32 code
6197 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6198 avoid infinite loop
6199 * Always call 'fREe()' rather than 'free()'
6200
6201 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6202 * Fixed ordering problem with boundary-stamping
6203
6204 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6205 * Added pvalloc, as recommended by H.J. Liu
6206 * Added 64bit pointer support mainly from Wolfram Gloger
6207 * Added anonymously donated WIN32 sbrk emulation
6208 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6209 * malloc_extend_top: fix mask error that caused wastage after
6210 foreign sbrks
6211 * Add linux mremap support code from HJ Liu
6212
6213 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6214 * Integrated most documentation with the code.
6215 * Add support for mmap, with help from
6216 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6217 * Use last_remainder in more cases.
6218 * Pack bins using idea from colin@nyx10.cs.du.edu
6219 * Use ordered bins instead of best-fit threshold
6220 * Eliminate block-local decls to simplify tracing and debugging.
6221 * Support another case of realloc via move into top
6222 * Fix error occurring when initial sbrk_base not word-aligned.
6223 * Rely on page size for units instead of SBRK_UNIT to
6224 avoid surprises about sbrk alignment conventions.
6225 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6226 (raymond@es.ele.tue.nl) for the suggestion.
6227 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6228 * More precautions for cases where other routines call sbrk,
6229 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6230 * Added macros etc., allowing use in linux libc from
6231 H.J. Lu (hjl@gnu.ai.mit.edu)
6232 * Inverted this history list
6233
6234 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6235 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6236 * Removed all preallocation code since under current scheme
6237 the work required to undo bad preallocations exceeds
6238 the work saved in good cases for most test programs.
6239 * No longer use return list or unconsolidated bins since
6240 no scheme using them consistently outperforms those that don't
6241 given above changes.
6242 * Use best fit for very large chunks to prevent some worst-cases.
6243 * Added some support for debugging
6244
6245 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6246 * Removed footers when chunks are in use. Thanks to
6247 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6248
6249 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6250 * Added malloc_trim, with help from Wolfram Gloger
6251 (wmglo@Dent.MED.Uni-Muenchen.DE).
6252
6253 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6254
6255 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6256 * realloc: try to expand in both directions
6257 * malloc: swap order of clean-bin strategy;
6258 * realloc: only conditionally expand backwards
6259 * Try not to scavenge used bins
6260 * Use bin counts as a guide to preallocation
6261 * Occasionally bin return list chunks in first scan
6262 * Add a few optimizations from colin@nyx10.cs.du.edu
6263
6264 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6265 * faster bin computation & slightly different binning
6266 * merged all consolidations to one part of malloc proper
6267 (eliminating old malloc_find_space & malloc_clean_bin)
6268 * Scan 2 returns chunks (not just 1)
6269 * Propagate failure in realloc if malloc returns 0
6270 * Add stuff to allow compilation on non-ANSI compilers
6271 from kpv@research.att.com
6272
6273 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6274 * removed potential for odd address access in prev_chunk
6275 * removed dependency on getpagesize.h
6276 * misc cosmetics and a bit more internal documentation
6277 * anticosmetics: mangled names in macros to evade debugger strangeness
6278 * tested on sparc, hp-700, dec-mips, rs6000
6279 with gcc & native cc (hp, dec only) allowing
6280 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6281
6282 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6283 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6284 structure of old version, but most details differ.)
6285
6286*/