internal/fileinit: generate reflect data structures from raw descriptors

This CL takes a significantly different approach to generating support
for protobuf reflection. The previous approach involved generating a
large number of Go literals to represent the reflection information.
While that approach was correct, it resulted in too much binary bloat.

The approach taken here initializes the reflection information from
the raw descriptor proto, which is a relatively dense representation
of the protobuf reflection information. In order to keep initialization
cost low, several measures were taken:
* At program init, the bare minimum is parsed in order to initialize
naming information for enums, messages, extensions, and services declared
in the file. This is done because those top-level declarations are often
relevant for registration.
* Only upon first use are most of the other data structures for protobuf
reflection actually initialized.
* Instead of using proto.Unmarshal, a hand-written unmarshaler is used.
This avoids a dependency on the descriptor proto and also sidesteps its API,
which is fundamentally non-performant since it requires an allocation for
every primitive field. A hedged sketch of the resulting usage follows this
list.
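
The following sketch shows how a generated file might seed its reflection
data. It is illustrative only: the package name, descriptor bytes, and
variable names are invented, and the exact shape of the generated output is
not part of this description.

	package examplepb // hypothetical generated package

	import (
		fileinit "github.com/golang/protobuf/v2/internal/fileinit"
	)

	// A FileDescriptorProto encoded by hand for illustration; it declares
	// an empty file with only a name, a package, and a syntax:
	//   field 1 (name):    "example.proto"
	//   field 2 (package): "example"
	//   field 12 (syntax): "proto3"
	var exampleRawDesc = []byte(
		"\x0a\x0dexample.proto" +
			"\x12\x07example" +
			"\x62\x06proto3")

	// Seeded at program init: only naming information is parsed here;
	// everything else is unmarshaled lazily upon first use.
	var exampleFileDesc = fileinit.FileBuilder{
		RawDescriptor: exampleRawDesc,
		// GoTypes, DependencyIndexes, and the *OutputTypes slices are
		// empty because this file declares no enums, messages,
		// extensions, or services.
	}.Init()

Accessors like Path and Package are available immediately after Init;
calling Syntax or DescriptorByName triggers the one-time lazy unmarshaling
of the remainder of the descriptor.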

At a high level, the new implementation lives in internal/fileinit.

Several changes were made to other parts of the repository:
* cmd/protoc-gen-go:
  * Stop compressing the raw descriptors. While compression does reduce
the size of the descriptors by approximately 2x, it is a premature
optimization since the descriptors themselves are around 1% of the total
binary bloat that is due to generated protobufs.
  * Seeding protobuf reflection from the raw descriptor significantly
simplifies the generator implementation since it is no longer responsible
for constructing a tree of Go literals to represent the same information.
  * We remove the generation of the shadow types and instead call
protoimpl.MessageType.MessageOf. Unfortunately, this incurs an allocation
for every call to ProtoReflect since we need to allocate a tuple that wraps
a pointer to the message value and a pointer to the message type.
* internal/impl:
  * We add a MessageType.GoType field and make it required that it is
set prior to first use. This is done so that we can avoid calling
MessageType.init except when it is actually needed. This allows code
to call (*FooMessage)(nil).ProtoReflect().Type() without fearing that the
init code will run, possibly triggering a recursive deadlock (where the
init code depends on getting the Type of some dependency which may be
declared within the same file). A simplified sketch of this pattern appears
after this list.
* internal/cmd/generate-types:
  * The code to generate reflect/prototype/protofile_list_gen.go was copied
and altered to generate internal/fileinit.desc_list_gen.go.
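
The following standalone sketch illustrates the eager-GoType,
lazy-everything-else pattern from the internal/impl bullet above. It is not
the actual internal/impl API; the type and method names are illustrative
only.

	package main

	import (
		"fmt"
		"reflect"
		"sync"
	)

	type FooMessage struct{}

	// messageType mimics the pattern: GoType must be set before first
	// use, while the remaining metadata is initialized lazily.
	type messageType struct {
		GoType reflect.Type // set eagerly; safe to read without init

		once     sync.Once
		fullName string // populated lazily on first use
	}

	// Type does not trigger lazy initialization, so calling it from
	// init-time code cannot cause a recursive init deadlock.
	func (mt *messageType) Type() reflect.Type { return mt.GoType }

	// FullName runs the one-time lazy initialization on first use.
	func (mt *messageType) FullName() string {
		mt.once.Do(func() {
			mt.fullName = "example.FooMessage" // stand-in for descriptor parsing
		})
		return mt.fullName
	}

	func main() {
		mt := &messageType{GoType: reflect.TypeOf((*FooMessage)(nil))}
		fmt.Println(mt.Type())     // no lazy init performed
		fmt.Println(mt.FullName()) // first use runs the lazy init
	}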

At a high level, this CL adds significant technical complexity.
However, this is offset by several possible future changes:
* The prototype package can be drastically simplified. We can probably
reimplement internal/legacy to use internal/fileinit instead, allowing us
to drop another dependency on the prototype package. As a result, we can
probably delete most of the constructor types in that package.
* With the prototype package significantly pruned, and with generated code
no longer depending on that package, we can consider merging
what's left of prototype into protodesc.

Change-Id: I6090f023f2e1b6afaf62bd3ae883566242e30715
Reviewed-on: https://go-review.googlesource.com/c/158539
Reviewed-by: Herbie Ong <herbie@google.com>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
diff --git a/internal/fileinit/desc.go b/internal/fileinit/desc.go
new file mode 100644
index 0000000..6e798d3
--- /dev/null
+++ b/internal/fileinit/desc.go
@@ -0,0 +1,474 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package fileinit constructs protoreflect.FileDescriptors from the encoded
+// file descriptor proto messages. This package uses a custom proto unmarshaler
+// both to avoid a dependency on the descriptor proto and, for performance,
+// to keep the initialization cost as low as possible.
+package fileinit
+
+import (
+	"fmt"
+	"reflect"
+	"sync"
+
+	pragma "github.com/golang/protobuf/v2/internal/pragma"
+	pfmt "github.com/golang/protobuf/v2/internal/typefmt"
+	"github.com/golang/protobuf/v2/proto"
+	pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+	ptype "github.com/golang/protobuf/v2/reflect/prototype"
+)
+
+// FileBuilder constructs a protoreflect.FileDescriptor from the
+// raw file descriptor and the Go types for declarations and dependencies.
+//
+//
+// Flattened Ordering
+//
+// The protobuf type system represents declarations as a tree. Certain nodes in
+// the tree require us either to associate them with a concrete Go type or to
+// resolve a dependency; this information must be provided separately since it
+// cannot be derived from the file descriptor alone.
+//
+// However, representing a tree as Go literals is difficult to do in a space-
+// and time-efficient way. Thus, we store the declarations as a flattened list of
+// objects where the serialization order from the tree-based form is important.
+//
+// The "flattened ordering" is defined as a tree traversal of all enum, message,
+// extension, and service declarations using the following algorithm:
+//
+//	def VisitFileDecls(fd):
+//		for e in fd.Enums:      yield e
+//		for m in fd.Messages:   yield m
+//		for x in fd.Extensions: yield x
+//		for s in fd.Services:   yield s
+//		for m in fd.Messages:   yield from VisitMessageDecls(m)
+//
+//	def VisitMessageDecls(md):
+//		for e in md.Enums:      yield e
+//		for m in md.Messages:   yield m
+//		for x in md.Extensions: yield x
+//		for m in md.Messages:   yield from VisitMessageDecls(m)
+//
+// The traversal starts at the root file descriptor and yields each direct
+// declaration within each node before traversing into sub-declarations
+// that children themselves may have.
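+//
+// For example, a file declaring enum E1, message M1 (which itself declares
+// enum E2 and message M2), and extension X1 yields the flattened ordering:
+// E1, M1, X1, E2, M2.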
+type FileBuilder struct {
+	// RawDescriptor is the wire-encoded bytes of FileDescriptorProto.
+	RawDescriptor []byte
+
+	// GoTypes is a unique set of the Go types for all declarations and
+	// dependencies. Each type is represented as a zero value of the Go type.
+	//
+	// Declarations are Go types generated for enums and messages directly
+	// declared (not publicly imported) in the proto source file.
+	// Messages for map entries are included, but represented by nil.
+	// Enum declarations in "flattened ordering" come first, followed by
+	// message declarations in "flattened ordering". The length of each sub-list
+	// is len(EnumOutputTypes) and len(MessageOutputTypes), respectively.
+	//
+	// Dependencies are Go types for enums or messages referenced by
+	// message fields (excluding weak fields), for parent extended messages of
+	// extension fields, for enums or messages referenced by extension fields,
+	// and for input and output messages referenced by service methods.
+	// Dependencies must come after declarations, but the ordering of
+	// dependencies themselves is unspecified.
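+	//
+	// For example, a file declaring enum E and message M, where M has a field
+	// of an imported message type D, might have:
+	//	GoTypes: []interface{}{(E)(0), (*M)(nil), (*D)(nil)}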
+	GoTypes []interface{}
+
+	// DependencyIndexes is an ordered list of indexes into GoTypes for the
+	// dependencies of messages, extensions, or services. There are 4 sub-lists
+	// each in "flattened ordering" concatenated back-to-back:
+	//	* Extension field targets: list of the extended parent message of
+	//	every extension. Length is len(ExtensionOutputTypes).
+	//	* Message field dependencies: list of the enum or message type
+	//	referred to by every message field.
+	//	* Extension field dependencies: list of the enum or message type
+	//	referred to by every extension field.
+	//	* Service method dependencies: list of the input and output message type
+	//	referred to by every service method.
+	DependencyIndexes []int32
+
+	// TODO: Provide a list of imported files.
+	// FileDependencies []pref.FileDescriptor
+
+	// TODO: Provide a list of extension types for options extensions.
+	// OptionDependencies []pref.ExtensionType
+
+	// EnumOutputTypes is where Init stores all initialized enum types
+	// in "flattened ordering".
+	EnumOutputTypes []pref.EnumType
+	// MessageOutputTypes is where Init stores all initialized message types
+	// in "flattened ordering"; this includes map entry types.
+	MessageOutputTypes []pref.MessageType
+	// ExtensionOutputTypes is where Init stores all initialized extension types
+	// in "flattened ordering".
+	ExtensionOutputTypes []pref.ExtensionType
+
+	// TODO: Provide ability for FileBuilder to handle registration?
+	// FilesRegistry *pref.Files
+	// TypesRegistry *pref.Types
+}
+
+// Init constructs a FileDescriptor given the parameters set in FileBuilder.
+// It assumes that the inputs are well-formed and panics if any inconsistencies
+// are encountered.
+func (fb FileBuilder) Init() pref.FileDescriptor {
+	fd := newFileDesc(fb)
+
+	for i := range fd.allEnums {
+		fb.EnumOutputTypes[i] = &fd.allEnums[i]
+	}
+	for i := range fd.allMessages {
+		fb.MessageOutputTypes[i] = &fd.allMessages[i]
+	}
+	for i := range fd.allExtensions {
+		fb.ExtensionOutputTypes[i] = &fd.allExtensions[i]
+	}
+	return fd
+}
+
+type (
+	// fileInit contains a copy of certain fields in FileBuilder for use during
+	// lazy initialization upon first use.
+	fileInit struct {
+		RawDescriptor     []byte
+		GoTypes           []interface{}
+		DependencyIndexes []int32
+	}
+	fileDesc struct {
+		fileInit
+
+		path         string
+		protoPackage pref.FullName
+
+		fileDecls
+
+		enums      enumDescs
+		messages   messageDescs
+		extensions extensionDescs
+		services   serviceDescs
+
+		once sync.Once
+		lazy *fileLazy // protected by once
+	}
+	fileDecls struct {
+		allEnums      []enumDesc
+		allMessages   []messageDesc
+		allExtensions []extensionDesc
+	}
+	fileLazy struct {
+		syntax  pref.Syntax
+		imports fileImports
+		byName  map[pref.FullName]pref.Descriptor
+		options []byte
+	}
+)
+
+func (fd *fileDesc) Parent() (pref.Descriptor, bool) { return nil, false }
+func (fd *fileDesc) Index() int                      { return 0 }
+func (fd *fileDesc) Syntax() pref.Syntax             { return fd.lazyInit().syntax }
+func (fd *fileDesc) Name() pref.Name                 { return fd.Package().Name() }
+func (fd *fileDesc) FullName() pref.FullName         { return fd.Package() }
+func (fd *fileDesc) IsPlaceholder() bool             { return false }
+func (fd *fileDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.FileOptions(), fd.lazyInit().options)
+}
+func (fd *fileDesc) Path() string                                     { return fd.path }
+func (fd *fileDesc) Package() pref.FullName                           { return fd.protoPackage }
+func (fd *fileDesc) Imports() pref.FileImports                        { return &fd.lazyInit().imports }
+func (fd *fileDesc) Enums() pref.EnumDescriptors                      { return &fd.enums }
+func (fd *fileDesc) Messages() pref.MessageDescriptors                { return &fd.messages }
+func (fd *fileDesc) Extensions() pref.ExtensionDescriptors            { return &fd.extensions }
+func (fd *fileDesc) Services() pref.ServiceDescriptors                { return &fd.services }
+func (fd *fileDesc) DescriptorByName(s pref.FullName) pref.Descriptor { return fd.lazyInit().byName[s] }
+func (fd *fileDesc) Format(s fmt.State, r rune)                       { pfmt.FormatDesc(s, r, fd) }
+func (fd *fileDesc) ProtoType(pref.FileDescriptor)                    {}
+func (fd *fileDesc) ProtoInternal(pragma.DoNotImplement)              {}
+
+type (
+	enumDesc struct {
+		baseDesc
+
+		lazy *enumLazy // protected by fileDesc.once
+	}
+	enumLazy struct {
+		typ reflect.Type
+		new func(pref.EnumNumber) pref.Enum
+
+		values     enumValueDescs
+		resvNames  names
+		resvRanges enumRanges
+		options    []byte
+	}
+	enumValueDesc struct {
+		baseDesc
+
+		number  pref.EnumNumber
+		options []byte
+	}
+)
+
+func (ed *enumDesc) GoType() reflect.Type            { return ed.lazyInit().typ }
+func (ed *enumDesc) New(n pref.EnumNumber) pref.Enum { return ed.lazyInit().new(n) }
+func (ed *enumDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.EnumOptions(), ed.lazyInit().options)
+}
+func (ed *enumDesc) Values() pref.EnumValueDescriptors { return &ed.lazyInit().values }
+func (ed *enumDesc) ReservedNames() pref.Names         { return &ed.lazyInit().resvNames }
+func (ed *enumDesc) ReservedRanges() pref.EnumRanges   { return &ed.lazyInit().resvRanges }
+func (ed *enumDesc) Format(s fmt.State, r rune)        { pfmt.FormatDesc(s, r, ed) }
+func (ed *enumDesc) ProtoType(pref.EnumDescriptor)     {}
+func (ed *enumDesc) lazyInit() *enumLazy {
+	ed.parentFile.lazyInit() // implicitly initializes enumLazy
+	return ed.lazy
+}
+
+func (ed *enumValueDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.EnumValueOptions(), ed.options)
+}
+func (ed *enumValueDesc) Number() pref.EnumNumber            { return ed.number }
+func (ed *enumValueDesc) Format(s fmt.State, r rune)         { pfmt.FormatDesc(s, r, ed) }
+func (ed *enumValueDesc) ProtoType(pref.EnumValueDescriptor) {}
+
+type (
+	messageDesc struct {
+		baseDesc
+
+		enums      enumDescs
+		messages   messageDescs
+		extensions extensionDescs
+
+		lazy *messageLazy // protected by fileDesc.once
+	}
+	messageLazy struct {
+		typ reflect.Type
+		new func() pref.Message
+
+		isMapEntry      bool
+		fields          fieldDescs
+		oneofs          oneofDescs
+		resvNames       names
+		resvRanges      fieldRanges
+		reqNumbers      fieldNumbers
+		extRanges       fieldRanges
+		extRangeOptions [][]byte
+		options         []byte
+	}
+	fieldDesc struct {
+		baseDesc
+
+		number      pref.FieldNumber
+		cardinality pref.Cardinality
+		kind        pref.Kind
+		hasJSONName bool
+		jsonName    string
+		hasPacked   bool
+		isPacked    bool
+		isWeak      bool
+		isMap       bool
+		defVal      defaultValue
+		oneofType   pref.OneofDescriptor
+		enumType    pref.EnumDescriptor
+		messageType pref.MessageDescriptor
+		options     []byte
+	}
+	oneofDesc struct {
+		baseDesc
+
+		fields  oneofFields
+		options []byte
+	}
+)
+
+func (md *messageDesc) GoType() reflect.Type { return md.lazyInit().typ }
+func (md *messageDesc) New() pref.Message    { return md.lazyInit().new() }
+func (md *messageDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.MessageOptions(), md.lazyInit().options)
+}
+func (md *messageDesc) IsMapEntry() bool                   { return md.lazyInit().isMapEntry }
+func (md *messageDesc) Fields() pref.FieldDescriptors      { return &md.lazyInit().fields }
+func (md *messageDesc) Oneofs() pref.OneofDescriptors      { return &md.lazyInit().oneofs }
+func (md *messageDesc) ReservedNames() pref.Names          { return &md.lazyInit().resvNames }
+func (md *messageDesc) ReservedRanges() pref.FieldRanges   { return &md.lazyInit().resvRanges }
+func (md *messageDesc) RequiredNumbers() pref.FieldNumbers { return &md.lazyInit().reqNumbers }
+func (md *messageDesc) ExtensionRanges() pref.FieldRanges  { return &md.lazyInit().extRanges }
+func (md *messageDesc) ExtensionRangeOptions(i int) pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.ExtensionRangeOptions(), md.lazyInit().extRangeOptions[i])
+}
+func (md *messageDesc) Enums() pref.EnumDescriptors           { return &md.enums }
+func (md *messageDesc) Messages() pref.MessageDescriptors     { return &md.messages }
+func (md *messageDesc) Extensions() pref.ExtensionDescriptors { return &md.extensions }
+func (md *messageDesc) Format(s fmt.State, r rune)            { pfmt.FormatDesc(s, r, md) }
+func (md *messageDesc) ProtoType(pref.MessageDescriptor)      {}
+func (md *messageDesc) lazyInit() *messageLazy {
+	md.parentFile.lazyInit() // implicitly initializes messageLazy
+	return md.lazy
+}
+
+func (fd *fieldDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.FieldOptions(), fd.options)
+}
+func (fd *fieldDesc) Number() pref.FieldNumber                   { return fd.number }
+func (fd *fieldDesc) Cardinality() pref.Cardinality              { return fd.cardinality }
+func (fd *fieldDesc) Kind() pref.Kind                            { return fd.kind }
+func (fd *fieldDesc) HasJSONName() bool                          { return fd.hasJSONName }
+func (fd *fieldDesc) JSONName() string                           { return fd.jsonName }
+func (fd *fieldDesc) IsPacked() bool                             { return fd.isPacked }
+func (fd *fieldDesc) IsWeak() bool                               { return fd.isWeak }
+func (fd *fieldDesc) IsMap() bool                                { return fd.isMap }
+func (fd *fieldDesc) HasDefault() bool                           { return fd.defVal.has }
+func (fd *fieldDesc) Default() pref.Value                        { return fd.defVal.get() }
+func (fd *fieldDesc) DefaultEnumValue() pref.EnumValueDescriptor { return fd.defVal.enum }
+func (fd *fieldDesc) OneofType() pref.OneofDescriptor            { return fd.oneofType }
+func (fd *fieldDesc) ExtendedType() pref.MessageDescriptor       { return nil }
+func (fd *fieldDesc) EnumType() pref.EnumDescriptor              { return fd.enumType }
+func (fd *fieldDesc) MessageType() pref.MessageDescriptor        { return fd.messageType }
+func (fd *fieldDesc) Format(s fmt.State, r rune)                 { pfmt.FormatDesc(s, r, fd) }
+func (fd *fieldDesc) ProtoType(pref.FieldDescriptor)             {}
+
+func (od *oneofDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.OneofOptions(), od.options)
+}
+func (od *oneofDesc) Fields() pref.FieldDescriptors  { return &od.fields }
+func (od *oneofDesc) Format(s fmt.State, r rune)     { pfmt.FormatDesc(s, r, od) }
+func (od *oneofDesc) ProtoType(pref.OneofDescriptor) {}
+
+type (
+	extensionDesc struct {
+		baseDesc
+
+		number       pref.FieldNumber
+		extendedType pref.MessageDescriptor
+
+		lazy *extensionLazy // protected by fileDesc.once
+	}
+	extensionLazy struct {
+		typ         reflect.Type
+		new         func() pref.Value
+		valueOf     func(interface{}) pref.Value
+		interfaceOf func(pref.Value) interface{}
+
+		cardinality pref.Cardinality
+		kind        pref.Kind
+		// Extensions should not have JSON names, but older versions of protoc
+		// used to set one on the descriptor. Preserve it for now to maintain
+		// the property that protoc 3.6.1 descriptors can round-trip through
+		// this package losslessly.
+		//
+		// TODO: Consider whether to drop JSONName parsing from extensions.
+		hasJSONName bool
+		jsonName    string
+		isPacked    bool
+		defVal      defaultValue
+		enumType    pref.EnumType
+		messageType pref.MessageType
+		options     []byte
+	}
+)
+
+func (xd *extensionDesc) GoType() reflect.Type                 { return xd.lazyInit().typ }
+func (xd *extensionDesc) New() pref.Value                      { return xd.lazyInit().new() }
+func (xd *extensionDesc) ValueOf(v interface{}) pref.Value     { return xd.lazyInit().valueOf(v) }
+func (xd *extensionDesc) InterfaceOf(v pref.Value) interface{} { return xd.lazyInit().interfaceOf(v) }
+func (xd *extensionDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.FieldOptions(), xd.lazyInit().options)
+}
+func (xd *extensionDesc) Number() pref.FieldNumber                   { return xd.number }
+func (xd *extensionDesc) Cardinality() pref.Cardinality              { return xd.lazyInit().cardinality }
+func (xd *extensionDesc) Kind() pref.Kind                            { return xd.lazyInit().kind }
+func (xd *extensionDesc) HasJSONName() bool                          { return xd.lazyInit().hasJSONName }
+func (xd *extensionDesc) JSONName() string                           { return xd.lazyInit().jsonName }
+func (xd *extensionDesc) IsPacked() bool                             { return xd.lazyInit().isPacked }
+func (xd *extensionDesc) IsWeak() bool                               { return false }
+func (xd *extensionDesc) IsMap() bool                                { return false }
+func (xd *extensionDesc) HasDefault() bool                           { return xd.lazyInit().defVal.has }
+func (xd *extensionDesc) Default() pref.Value                        { return xd.lazyInit().defVal.get() }
+func (xd *extensionDesc) DefaultEnumValue() pref.EnumValueDescriptor { return xd.lazyInit().defVal.enum }
+func (xd *extensionDesc) OneofType() pref.OneofDescriptor            { return nil }
+func (xd *extensionDesc) ExtendedType() pref.MessageDescriptor       { return xd.extendedType }
+func (xd *extensionDesc) EnumType() pref.EnumDescriptor              { return xd.lazyInit().enumType }
+func (xd *extensionDesc) MessageType() pref.MessageDescriptor        { return xd.lazyInit().messageType }
+func (xd *extensionDesc) Format(s fmt.State, r rune)                 { pfmt.FormatDesc(s, r, xd) }
+func (xd *extensionDesc) ProtoType(pref.FieldDescriptor)             {}
+func (xd *extensionDesc) ProtoInternal(pragma.DoNotImplement)        {}
+func (xd *extensionDesc) lazyInit() *extensionLazy {
+	xd.parentFile.lazyInit() // implicitly initializes extensionLazy
+	return xd.lazy
+}
+
+type (
+	serviceDesc struct {
+		baseDesc
+
+		lazy *serviceLazy // protected by fileDesc.once
+	}
+	serviceLazy struct {
+		methods methodDescs
+		options []byte
+	}
+	methodDesc struct {
+		baseDesc
+
+		inputType         pref.MessageDescriptor
+		outputType        pref.MessageDescriptor
+		isStreamingClient bool
+		isStreamingServer bool
+		options           []byte
+	}
+)
+
+func (sd *serviceDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.ServiceOptions(), sd.lazyInit().options)
+}
+func (sd *serviceDesc) Methods() pref.MethodDescriptors     { return &sd.lazyInit().methods }
+func (sd *serviceDesc) Format(s fmt.State, r rune)          { pfmt.FormatDesc(s, r, sd) }
+func (sd *serviceDesc) ProtoType(pref.ServiceDescriptor)    {}
+func (sd *serviceDesc) ProtoInternal(pragma.DoNotImplement) {}
+func (sd *serviceDesc) lazyInit() *serviceLazy {
+	sd.parentFile.lazyInit() // implicitly initializes serviceLazy
+	return sd.lazy
+}
+
+func (md *methodDesc) Options() pref.OptionsMessage {
+	return unmarshalOptions(ptype.X.MethodOptions(), md.options)
+}
+func (md *methodDesc) InputType() pref.MessageDescriptor   { return md.inputType }
+func (md *methodDesc) OutputType() pref.MessageDescriptor  { return md.outputType }
+func (md *methodDesc) IsStreamingClient() bool             { return md.isStreamingClient }
+func (md *methodDesc) IsStreamingServer() bool             { return md.isStreamingServer }
+func (md *methodDesc) Format(s fmt.State, r rune)          { pfmt.FormatDesc(s, r, md) }
+func (md *methodDesc) ProtoType(pref.MethodDescriptor)     {}
+func (md *methodDesc) ProtoInternal(pragma.DoNotImplement) {}
+
+type baseDesc struct {
+	parentFile *fileDesc
+	parent     pref.Descriptor
+	index      int
+	fullName
+}
+
+func (d *baseDesc) Parent() (pref.Descriptor, bool)     { return d.parent, true }
+func (d *baseDesc) Index() int                          { return d.index }
+func (d *baseDesc) Syntax() pref.Syntax                 { return d.parentFile.Syntax() }
+func (d *baseDesc) IsPlaceholder() bool                 { return false }
+func (d *baseDesc) ProtoInternal(pragma.DoNotImplement) {}
+
+type fullName struct {
+	shortLen int
+	fullName pref.FullName
+}
+
+func (s *fullName) Name() pref.Name         { return pref.Name(s.fullName[len(s.fullName)-s.shortLen:]) }
+func (s *fullName) FullName() pref.FullName { return s.fullName }
+
+func unmarshalOptions(p pref.OptionsMessage, b []byte) pref.OptionsMessage {
+	if b != nil {
+		// TODO: Consider caching the unmarshaled options message.
+		p = reflect.New(reflect.TypeOf(p).Elem()).Interface().(pref.OptionsMessage)
+		if err := proto.Unmarshal(b, p.(proto.Message)); err != nil {
+			panic(err)
+		}
+	}
+	return p.(proto.Message)
+}
diff --git a/internal/fileinit/desc_init.go b/internal/fileinit/desc_init.go
new file mode 100644
index 0000000..4955f4d
--- /dev/null
+++ b/internal/fileinit/desc_init.go
@@ -0,0 +1,356 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fileinit
+
+import (
+	wire "github.com/golang/protobuf/v2/internal/encoding/wire"
+	pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+)
+
+func newFileDesc(fb FileBuilder) *fileDesc {
+	file := &fileDesc{fileInit: fileInit{
+		RawDescriptor:     fb.RawDescriptor,
+		GoTypes:           fb.GoTypes,
+		DependencyIndexes: fb.DependencyIndexes,
+	}}
+	file.initDecls(len(fb.EnumOutputTypes), len(fb.MessageOutputTypes), len(fb.ExtensionOutputTypes))
+	file.unmarshalSeed(fb.RawDescriptor)
+
+	// Extended message dependencies are eagerly handled since registration
+	// needs this information at program init time.
+	for i := range file.allExtensions {
+		xd := &file.allExtensions[i]
+		xd.extendedType = file.popMessageDependency()
+	}
+
+	file.checkDecls()
+	return file
+}
+
+// initDecls pre-allocates slices for the exact number of enums, messages
+// (excluding map entries), and extensions declared in the proto file.
+// This is done to avoid regrowing the slice, which would change the address
+// for any previously seen declaration.
+//
+// The alloc methods "allocate" slices by pulling from the capacity.
+func (fd *fileDecls) initDecls(numEnums, numMessages, numExtensions int) {
+	*fd = fileDecls{
+		allEnums:      make([]enumDesc, 0, numEnums),
+		allMessages:   make([]messageDesc, 0, numMessages),
+		allExtensions: make([]extensionDesc, 0, numExtensions),
+	}
+}
+
+func (fd *fileDecls) allocEnums(n int) []enumDesc {
+	total := len(fd.allEnums)
+	es := fd.allEnums[total : total+n]
+	fd.allEnums = fd.allEnums[:total+n]
+	return es
+}
+func (fd *fileDecls) allocMessages(n int) []messageDesc {
+	total := len(fd.allMessages)
+	ms := fd.allMessages[total : total+n]
+	fd.allMessages = fd.allMessages[:total+n]
+	return ms
+}
+func (fd *fileDecls) allocExtensions(n int) []extensionDesc {
+	total := len(fd.allExtensions)
+	xs := fd.allExtensions[total : total+n]
+	fd.allExtensions = fd.allExtensions[:total+n]
+	return xs
+}
+
+// checkDecls performs a sanity check that the expected number of declarations
+// matches the number found in the descriptor proto.
+func (fd *fileDecls) checkDecls() {
+	if len(fd.allEnums) != cap(fd.allEnums) ||
+		len(fd.allMessages) != cap(fd.allMessages) ||
+		len(fd.allExtensions) != cap(fd.allExtensions) {
+		panic("mismatching cardinality")
+	}
+}
+
+func (fd *fileDesc) unmarshalSeed(b []byte) {
+	nb := getNameBuilder()
+	defer putNameBuilder(nb)
+
+	var prevField pref.FieldNumber
+	var numEnums, numMessages, numExtensions, numServices int
+	var posEnums, posMessages, posExtensions, posServices int
+	b0 := b
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case fileDesc_Name:
+				fd.path = nb.MakeString(v)
+			case fileDesc_Package:
+				fd.protoPackage = pref.FullName(nb.MakeString(v))
+			case fileDesc_Enums:
+				if prevField != fileDesc_Enums {
+					if numEnums > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posEnums = len(b0) - len(b) - n - m
+				}
+				numEnums++
+			case fileDesc_Messages:
+				if prevField != fileDesc_Messages {
+					if numMessages > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posMessages = len(b0) - len(b) - n - m
+				}
+				numMessages++
+			case fileDesc_Extensions:
+				if prevField != fileDesc_Extensions {
+					if numExtensions > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posExtensions = len(b0) - len(b) - n - m
+				}
+				numExtensions++
+			case fileDesc_Services:
+				if prevField != fileDesc_Services {
+					if numServices > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posServices = len(b0) - len(b) - n - m
+				}
+				numServices++
+			}
+			prevField = num
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+			prevField = -1 // ignore known field numbers of unknown wire type
+		}
+	}
+
+	// Must allocate all declarations before parsing each descriptor type
+	// to ensure we handled all descriptors in "flattened ordering".
+	if numEnums > 0 {
+		fd.enums.list = fd.allocEnums(numEnums)
+	}
+	if numMessages > 0 {
+		fd.messages.list = fd.allocMessages(numMessages)
+	}
+	if numExtensions > 0 {
+		fd.extensions.list = fd.allocExtensions(numExtensions)
+	}
+	if numServices > 0 {
+		fd.services.list = make([]serviceDesc, numServices)
+	}
+
+	if numEnums > 0 {
+		b := b0[posEnums:]
+		for i := range fd.enums.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			fd.enums.list[i].unmarshalSeed(v, nb, fd, fd, i)
+			b = b[n+m:]
+		}
+	}
+	if numMessages > 0 {
+		b := b0[posMessages:]
+		for i := range fd.messages.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			fd.messages.list[i].unmarshalSeed(v, nb, fd, fd, i)
+			b = b[n+m:]
+		}
+	}
+	if numExtensions > 0 {
+		b := b0[posExtensions:]
+		for i := range fd.extensions.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			fd.extensions.list[i].unmarshalSeed(v, nb, fd, fd, i)
+			b = b[n+m:]
+		}
+	}
+	if numServices > 0 {
+		b := b0[posServices:]
+		for i := range fd.services.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			fd.services.list[i].unmarshalSeed(v, nb, fd, fd, i)
+			b = b[n+m:]
+		}
+	}
+}
+
+func (ed *enumDesc) unmarshalSeed(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	ed.parentFile = pf
+	ed.parent = pd
+	ed.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case enumDesc_Name:
+				ed.fullName = nb.AppendFullName(pd.FullName(), v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
+
+func (md *messageDesc) unmarshalSeed(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	md.parentFile = pf
+	md.parent = pd
+	md.index = i
+
+	var prevField pref.FieldNumber
+	var numEnums, numMessages, numExtensions int
+	var posEnums, posMessages, posExtensions int
+	b0 := b
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case messageDesc_Name:
+				md.fullName = nb.AppendFullName(pd.FullName(), v)
+			case messageDesc_Enums:
+				if prevField != messageDesc_Enums {
+					if numEnums > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posEnums = len(b0) - len(b) - n - m
+				}
+				numEnums++
+			case messageDesc_Messages:
+				if prevField != messageDesc_Messages {
+					if numMessages > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posMessages = len(b0) - len(b) - n - m
+				}
+				numMessages++
+			case messageDesc_Extensions:
+				if prevField != messageDesc_Extensions {
+					if numExtensions > 0 {
+						panic("non-contiguous repeated field")
+					}
+					posExtensions = len(b0) - len(b) - n - m
+				}
+				numExtensions++
+			}
+			prevField = num
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+			prevField = -1 // ignore known field numbers of unknown wire type
+		}
+	}
+
+	// Must allocate all declarations before parsing each descriptor type
+	// to ensure we handled all descriptors in "flattened ordering".
+	if numEnums > 0 {
+		md.enums.list = md.parentFile.allocEnums(numEnums)
+	}
+	if numMessages > 0 {
+		md.messages.list = md.parentFile.allocMessages(numMessages)
+	}
+	if numExtensions > 0 {
+		md.extensions.list = md.parentFile.allocExtensions(numExtensions)
+	}
+
+	if numEnums > 0 {
+		b := b0[posEnums:]
+		for i := range md.enums.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			md.enums.list[i].unmarshalSeed(v, nb, pf, md, i)
+			b = b[n+m:]
+		}
+	}
+	if numMessages > 0 {
+		b := b0[posMessages:]
+		for i := range md.messages.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			md.messages.list[i].unmarshalSeed(v, nb, pf, md, i)
+			b = b[n+m:]
+		}
+	}
+	if numExtensions > 0 {
+		b := b0[posExtensions:]
+		for i := range md.extensions.list {
+			_, n := wire.ConsumeVarint(b)
+			v, m := wire.ConsumeBytes(b[n:])
+			md.extensions.list[i].unmarshalSeed(v, nb, pf, md, i)
+			b = b[n+m:]
+		}
+	}
+}
+
+func (xd *extensionDesc) unmarshalSeed(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	xd.parentFile = pf
+	xd.parent = pd
+	xd.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_Number:
+				xd.number = pref.FieldNumber(v)
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_Name:
+				xd.fullName = nb.AppendFullName(pd.FullName(), v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
+
+func (sd *serviceDesc) unmarshalSeed(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	sd.parentFile = pf
+	sd.parent = pd
+	sd.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case serviceDesc_Name:
+				sd.fullName = nb.AppendFullName(pd.FullName(), v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
diff --git a/internal/fileinit/desc_lazy.go b/internal/fileinit/desc_lazy.go
new file mode 100644
index 0000000..fc2235c
--- /dev/null
+++ b/internal/fileinit/desc_lazy.go
@@ -0,0 +1,878 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fileinit
+
+import (
+	"bytes"
+	"fmt"
+	"reflect"
+
+	defval "github.com/golang/protobuf/v2/internal/encoding/defval"
+	wire "github.com/golang/protobuf/v2/internal/encoding/wire"
+	pimpl "github.com/golang/protobuf/v2/internal/impl"
+	pvalue "github.com/golang/protobuf/v2/internal/value"
+	pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+	ptype "github.com/golang/protobuf/v2/reflect/prototype"
+)
+
+func (file *fileDesc) lazyInit() *fileLazy {
+	file.once.Do(func() {
+		file.unmarshalFull(file.RawDescriptor)
+		file.resolveImports()
+		file.resolveEnums()
+		file.resolveMessages()
+		file.resolveExtensions()
+		file.resolveServices()
+		file.finishInit()
+	})
+	return file.lazy
+}
+
+func (file *fileDesc) resolveImports() {
+	// TODO: Resolve file dependencies.
+}
+
+func (file *fileDesc) resolveEnums() {
+	enumDecls := file.GoTypes[:len(file.allEnums)]
+	for i := range file.allEnums {
+		ed := &file.allEnums[i]
+
+		// Associate the EnumType with a concrete Go type.
+		enumCache := map[pref.EnumNumber]pref.Enum{}
+		ed.lazy.typ = reflect.TypeOf(enumDecls[i])
+		ed.lazy.new = func(n pref.EnumNumber) pref.Enum {
+			if v, ok := enumCache[n]; ok {
+				return v
+			}
+			v := reflect.New(ed.lazy.typ).Elem()
+			v.SetInt(int64(n))
+			return v.Interface().(pref.Enum)
+		}
+		for i := range ed.lazy.values.list {
+			n := ed.lazy.values.list[i].number
+			enumCache[n] = ed.lazy.new(n)
+		}
+	}
+}
+
+func (file *fileDesc) resolveMessages() {
+	messageDecls := file.GoTypes[len(file.allEnums):]
+	for i := range file.allMessages {
+		md := &file.allMessages[i]
+
+		// Associate the MessageType with a concrete Go type.
+		//
+		// Note that descriptors for map entries, which have no associated
+		// Go type, also implement the protoreflect.MessageType interface,
+		// but have a GoType accessor that reports nil. Calling New results
+		// in a panic, which is sensible behavior.
+		md.lazy.typ = reflect.TypeOf(messageDecls[i])
+		md.lazy.new = func() pref.Message {
+			t := md.lazy.typ.Elem()
+			return reflect.New(t).Interface().(pref.ProtoMessage).ProtoReflect()
+		}
+
+		// Resolve message field dependencies.
+		for j := range md.lazy.fields.list {
+			fd := &md.lazy.fields.list[j]
+			if fd.isWeak {
+				continue
+			}
+
+			switch fd.kind {
+			case pref.EnumKind:
+				fd.enumType = file.popEnumDependency()
+			case pref.MessageKind, pref.GroupKind:
+				fd.messageType = file.popMessageDependency()
+			}
+			fd.isMap = file.isMapEntry(fd.messageType)
+			if !fd.hasPacked && file.lazy.syntax != pref.Proto2 && fd.cardinality == pref.Repeated {
+				switch fd.kind {
+				case pref.StringKind, pref.BytesKind, pref.MessageKind, pref.GroupKind:
+					fd.isPacked = false
+				default:
+					fd.isPacked = true
+				}
+			}
+			fd.defVal.lazyInit(fd.kind, file.enumValuesOf(fd.enumType))
+		}
+	}
+}
+
+func (file *fileDesc) resolveExtensions() {
+	for i := range file.allExtensions {
+		xd := &file.allExtensions[i]
+
+		// Associate the ExtensionType with a concrete Go type.
+		var typ reflect.Type
+		switch xd.lazy.kind {
+		case pref.EnumKind, pref.MessageKind, pref.GroupKind:
+			typ = reflect.TypeOf(file.GoTypes[file.DependencyIndexes[0]])
+		default:
+			typ = goTypeForPBKind[xd.lazy.kind]
+		}
+		switch xd.lazy.cardinality {
+		case pref.Optional:
+			switch xd.lazy.kind {
+			case pref.EnumKind:
+				xd.lazy.typ = typ
+				xd.lazy.new = func() pref.Value {
+					return xd.lazy.defVal.get()
+				}
+				xd.lazy.valueOf = func(v interface{}) pref.Value {
+					ev := v.(pref.Enum)
+					return pref.ValueOf(ev.Number())
+				}
+				xd.lazy.interfaceOf = func(pv pref.Value) interface{} {
+					return xd.lazy.enumType.New(pv.Enum())
+				}
+			case pref.MessageKind, pref.GroupKind:
+				xd.lazy.typ = typ
+				xd.lazy.new = func() pref.Value {
+					return pref.ValueOf(xd.lazy.messageType.New())
+				}
+				xd.lazy.valueOf = func(v interface{}) pref.Value {
+					mv := v.(pref.ProtoMessage).ProtoReflect()
+					return pref.ValueOf(mv)
+				}
+				xd.lazy.interfaceOf = func(pv pref.Value) interface{} {
+					return pv.Message().Interface()
+				}
+			default:
+				xd.lazy.typ = goTypeForPBKind[xd.lazy.kind]
+				xd.lazy.new = func() pref.Value {
+					return xd.lazy.defVal.get()
+				}
+				xd.lazy.valueOf = func(v interface{}) pref.Value {
+					return pref.ValueOf(v)
+				}
+				xd.lazy.interfaceOf = func(pv pref.Value) interface{} {
+					return pv.Interface()
+				}
+			}
+		case pref.Repeated:
+			c := pvalue.NewConverter(typ, xd.lazy.kind)
+			xd.lazy.typ = reflect.PtrTo(reflect.SliceOf(typ))
+			xd.lazy.new = func() pref.Value {
+				v := reflect.New(xd.lazy.typ.Elem()).Interface()
+				return pref.ValueOf(pvalue.ListOf(v, c))
+			}
+			xd.lazy.valueOf = func(v interface{}) pref.Value {
+				return pref.ValueOf(pvalue.ListOf(v, c))
+			}
+			xd.lazy.interfaceOf = func(pv pref.Value) interface{} {
+				return pv.List().(pvalue.Unwrapper).ProtoUnwrap()
+			}
+		default:
+			panic(fmt.Sprintf("invalid cardinality: %v", xd.lazy.cardinality))
+		}
+
+		// Resolve extension field dependency.
+		switch xd.lazy.kind {
+		case pref.EnumKind:
+			xd.lazy.enumType = file.popEnumDependency()
+		case pref.MessageKind, pref.GroupKind:
+			xd.lazy.messageType = file.popMessageDependency()
+		}
+		xd.lazy.defVal.lazyInit(xd.lazy.kind, file.enumValuesOf(xd.lazy.enumType))
+	}
+}
+
+var goTypeForPBKind = map[pref.Kind]reflect.Type{
+	pref.BoolKind:     reflect.TypeOf(bool(false)),
+	pref.Int32Kind:    reflect.TypeOf(int32(0)),
+	pref.Sint32Kind:   reflect.TypeOf(int32(0)),
+	pref.Sfixed32Kind: reflect.TypeOf(int32(0)),
+	pref.Int64Kind:    reflect.TypeOf(int64(0)),
+	pref.Sint64Kind:   reflect.TypeOf(int64(0)),
+	pref.Sfixed64Kind: reflect.TypeOf(int64(0)),
+	pref.Uint32Kind:   reflect.TypeOf(uint32(0)),
+	pref.Fixed32Kind:  reflect.TypeOf(uint32(0)),
+	pref.Uint64Kind:   reflect.TypeOf(uint64(0)),
+	pref.Fixed64Kind:  reflect.TypeOf(uint64(0)),
+	pref.FloatKind:    reflect.TypeOf(float32(0)),
+	pref.DoubleKind:   reflect.TypeOf(float64(0)),
+	pref.StringKind:   reflect.TypeOf(string("")),
+	pref.BytesKind:    reflect.TypeOf([]byte(nil)),
+}
+
+func (file *fileDesc) resolveServices() {
+	for i := range file.services.list {
+		sd := &file.services.list[i]
+
+		// Resolve method dependencies.
+		for j := range sd.lazy.methods.list {
+			md := &sd.lazy.methods.list[j]
+			md.inputType = file.popMessageDependency()
+			md.outputType = file.popMessageDependency()
+		}
+	}
+}
+
+// isMapEntry reports whether the message is a map entry, being careful to
+// avoid calling the IsMapEntry method if the message is declared
+// within the same file (which would cause a recursive init deadlock).
+func (fd *fileDesc) isMapEntry(md pref.MessageDescriptor) bool {
+	if md == nil {
+		return false
+	}
+	if md, ok := md.(*messageDesc); ok && md.parentFile == fd {
+		return md.lazy.isMapEntry
+	}
+	return md.IsMapEntry()
+}
+
+// enumValuesOf retrieves the list of enum values for the given enum,
+// being careful to avoid calling the Values method if the enum is declared
+// within the same file (which would cause a recursive init deadlock).
+func (fd *fileDesc) enumValuesOf(ed pref.EnumDescriptor) pref.EnumValueDescriptors {
+	if ed == nil {
+		return nil
+	}
+	if ed, ok := ed.(*enumDesc); ok && ed.parentFile == fd {
+		return &ed.lazy.values
+	}
+	return ed.Values()
+}
+
+func (fd *fileDesc) popEnumDependency() pref.EnumType {
+	depIdx := fd.popDependencyIndex()
+	if depIdx < len(fd.allEnums)+len(fd.allMessages) {
+		return &fd.allEnums[depIdx]
+	} else {
+		return pimpl.Export{}.EnumTypeOf(fd.GoTypes[depIdx])
+	}
+}
+
+func (fd *fileDesc) popMessageDependency() pref.MessageType {
+	depIdx := fd.popDependencyIndex()
+	if depIdx < len(fd.allEnums)+len(fd.allMessages) {
+		return &fd.allMessages[depIdx-len(fd.allEnums)]
+	} else {
+		return pimpl.Export{}.MessageTypeOf(fd.GoTypes[depIdx])
+	}
+}
+
+func (fi *fileInit) popDependencyIndex() int {
+	depIdx := fi.DependencyIndexes[0]
+	fi.DependencyIndexes = fi.DependencyIndexes[1:]
+	return int(depIdx)
+}
+
+func (fi *fileInit) finishInit() {
+	if len(fi.DependencyIndexes) > 0 {
+		panic("unused dependencies")
+	}
+	*fi = fileInit{} // clear fileInit for GC to reclaim resources
+}
+
+type defaultValue struct {
+	has   bool
+	val   pref.Value
+	enum  pref.EnumValueDescriptor
+	check func() // only set for non-empty bytes
+}
+
+func (dv *defaultValue) get() pref.Value {
+	if dv.check != nil {
+		dv.check()
+	}
+	return dv.val
+}
+
+func (dv *defaultValue) lazyInit(k pref.Kind, eds pref.EnumValueDescriptors) {
+	if dv.has {
+		switch k {
+		case pref.EnumKind:
+			// File descriptors always store default enums by name.
+			dv.enum = eds.ByName(pref.Name(dv.val.String()))
+			dv.val = pref.ValueOf(dv.enum.Number())
+		case pref.BytesKind:
+			// Store a copy of the default bytes, so that we can detect
+			// accidental mutations of the original value.
+			b := append([]byte(nil), dv.val.Bytes()...)
+			dv.check = func() {
+				if !bytes.Equal(b, dv.val.Bytes()) {
+					// TODO: Avoid panic if we're running with the race detector
+					// and instead spawn a goroutine that periodically resets
+					// this value back to the original to induce a race.
+					panic("detected mutation on the default bytes")
+				}
+			}
+		}
+	} else {
+		switch k {
+		case pref.BoolKind:
+			dv.val = pref.ValueOf(false)
+		case pref.Int32Kind, pref.Sint32Kind, pref.Sfixed32Kind:
+			dv.val = pref.ValueOf(int32(0))
+		case pref.Int64Kind, pref.Sint64Kind, pref.Sfixed64Kind:
+			dv.val = pref.ValueOf(int64(0))
+		case pref.Uint32Kind, pref.Fixed32Kind:
+			dv.val = pref.ValueOf(uint32(0))
+		case pref.Uint64Kind, pref.Fixed64Kind:
+			dv.val = pref.ValueOf(uint64(0))
+		case pref.FloatKind:
+			dv.val = pref.ValueOf(float32(0))
+		case pref.DoubleKind:
+			dv.val = pref.ValueOf(float64(0))
+		case pref.StringKind:
+			dv.val = pref.ValueOf(string(""))
+		case pref.BytesKind:
+			dv.val = pref.ValueOf([]byte(nil))
+		case pref.EnumKind:
+			dv.enum = eds.Get(0)
+			dv.val = pref.ValueOf(dv.enum.Number())
+		}
+	}
+}
+
+func (fd *fileDesc) unmarshalFull(b []byte) {
+	nb := getNameBuilder()
+	defer putNameBuilder(nb)
+
+	var hasSyntax bool
+	var enumIdx, messageIdx, extensionIdx, serviceIdx int
+	fd.lazy = &fileLazy{byName: make(map[pref.FullName]pref.Descriptor)}
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fileDesc_PublicImports:
+				fd.lazy.imports[v].IsPublic = true
+			case fileDesc_WeakImports:
+				fd.lazy.imports[v].IsWeak = true
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case fileDesc_Syntax:
+				hasSyntax = true
+				switch string(v) {
+				case "proto2":
+					fd.lazy.syntax = pref.Proto2
+				case "proto3":
+					fd.lazy.syntax = pref.Proto3
+				default:
+					panic("invalid syntax")
+				}
+			case fileDesc_Imports:
+				fd.lazy.imports = append(fd.lazy.imports, pref.FileImport{
+					FileDescriptor: ptype.PlaceholderFile(nb.MakeString(v), ""),
+				})
+			case fileDesc_Enums:
+				fd.enums.list[enumIdx].unmarshalFull(v, nb)
+				enumIdx++
+			case fileDesc_Messages:
+				fd.messages.list[messageIdx].unmarshalFull(v, nb)
+				messageIdx++
+			case fileDesc_Extensions:
+				fd.extensions.list[extensionIdx].unmarshalFull(v, nb)
+				extensionIdx++
+			case fileDesc_Services:
+				fd.services.list[serviceIdx].unmarshalFull(v, nb)
+				serviceIdx++
+			case fileDesc_Options:
+				fd.lazy.options = append(fd.lazy.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	// If syntax is missing, it is assumed to be proto2.
+	if !hasSyntax {
+		fd.lazy.syntax = pref.Proto2
+	}
+}
+
+func (ed *enumDesc) unmarshalFull(b []byte, nb *nameBuilder) {
+	var rawValues [][]byte
+	ed.lazy = new(enumLazy)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case enumDesc_Values:
+				rawValues = append(rawValues, v)
+			case enumDesc_ReservedNames:
+				ed.lazy.resvNames.list = append(ed.lazy.resvNames.list, pref.Name(nb.MakeString(v)))
+			case enumDesc_ReservedRanges:
+				ed.lazy.resvRanges.list = append(ed.lazy.resvRanges.list, unmarshalEnumReservedRange(v))
+			case enumDesc_Options:
+				ed.lazy.options = append(ed.lazy.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	if len(rawValues) > 0 {
+		ed.lazy.values.list = make([]enumValueDesc, len(rawValues))
+		for i, b := range rawValues {
+			ed.lazy.values.list[i].unmarshalFull(b, nb, ed.parentFile, ed, i)
+		}
+	}
+
+	ed.parentFile.lazy.byName[ed.FullName()] = ed
+}
+
+func unmarshalEnumReservedRange(b []byte) (r [2]pref.EnumNumber) {
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case enumReservedRange_Start:
+				r[0] = pref.EnumNumber(v)
+			case enumReservedRange_End:
+				r[1] = pref.EnumNumber(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+	return r
+}
+
+func (vd *enumValueDesc) unmarshalFull(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	vd.parentFile = pf
+	vd.parent = pd
+	vd.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case enumValueDesc_Number:
+				vd.number = pref.EnumNumber(v)
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case enumValueDesc_Name:
+				vd.fullName = nb.AppendFullName(pd.FullName(), v)
+			case enumValueDesc_Options:
+				vd.options = append(vd.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	vd.parentFile.lazy.byName[vd.FullName()] = vd
+}
+
+func (md *messageDesc) unmarshalFull(b []byte, nb *nameBuilder) {
+	var rawFields, rawOneofs [][]byte
+	var enumIdx, messageIdx, extensionIdx int
+	md.lazy = new(messageLazy)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case messageDesc_Fields:
+				rawFields = append(rawFields, v)
+			case messageDesc_Oneofs:
+				rawOneofs = append(rawOneofs, v)
+			case messageDesc_ReservedNames:
+				md.lazy.resvNames.list = append(md.lazy.resvNames.list, pref.Name(nb.MakeString(v)))
+			case messageDesc_ReservedRanges:
+				md.lazy.resvRanges.list = append(md.lazy.resvRanges.list, unmarshalMessageReservedRange(v))
+			case messageDesc_ExtensionRanges:
+				r, opts := unmarshalMessageExtensionRange(v)
+				md.lazy.extRanges.list = append(md.lazy.extRanges.list, r)
+				md.lazy.extRangeOptions = append(md.lazy.extRangeOptions, opts)
+			case messageDesc_Enums:
+				md.enums.list[enumIdx].unmarshalFull(v, nb)
+				enumIdx++
+			case messageDesc_Messages:
+				md.messages.list[messageIdx].unmarshalFull(v, nb)
+				messageIdx++
+			case messageDesc_Extensions:
+				md.extensions.list[extensionIdx].unmarshalFull(v, nb)
+				extensionIdx++
+			case messageDesc_Options:
+				md.unmarshalOptions(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	if len(rawFields) > 0 || len(rawOneofs) > 0 {
+		md.lazy.fields.list = make([]fieldDesc, len(rawFields))
+		md.lazy.oneofs.list = make([]oneofDesc, len(rawOneofs))
+		for i, b := range rawFields {
+			fd := &md.lazy.fields.list[i]
+			fd.unmarshalFull(b, nb, md.parentFile, md, i)
+			if fd.cardinality == pref.Required {
+				md.lazy.reqNumbers.list = append(md.lazy.reqNumbers.list, fd.number)
+			}
+		}
+		for i, b := range rawOneofs {
+			od := &md.lazy.oneofs.list[i]
+			od.unmarshalFull(b, nb, md.parentFile, md, i)
+		}
+	}
+
+	md.parentFile.lazy.byName[md.FullName()] = md
+}
+
+func (md *messageDesc) unmarshalOptions(b []byte) {
+	md.lazy.options = append(md.lazy.options, b...)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case messageOptions_IsMapEntry:
+				md.lazy.isMapEntry = wire.DecodeBool(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
+
+func unmarshalMessageReservedRange(b []byte) (r [2]pref.FieldNumber) {
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case messageReservedRange_Start:
+				r[0] = pref.FieldNumber(v)
+			case messageReservedRange_End:
+				r[1] = pref.FieldNumber(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+	return r
+}
+
+func unmarshalMessageExtensionRange(b []byte) (r [2]pref.FieldNumber, opts []byte) {
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case messageExtensionRange_Start:
+				r[0] = pref.FieldNumber(v)
+			case messageExtensionRange_End:
+				r[1] = pref.FieldNumber(v)
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case messageExtensionRange_Options:
+				opts = append(opts, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+	return r, opts
+}
+
+func (fd *fieldDesc) unmarshalFull(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	fd.parentFile = pf
+	fd.parent = pd
+	fd.index = i
+
+	var rawDefVal []byte
+	var rawTypeName []byte
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_Number:
+				fd.number = pref.FieldNumber(v)
+			case fieldDesc_Cardinality:
+				fd.cardinality = pref.Cardinality(v)
+			case fieldDesc_Kind:
+				fd.kind = pref.Kind(v)
+			case fieldDesc_OneofIndex:
+				// In messageDesc.UnmarshalFull, we allocate slices for both
+				// the field and oneof descriptors before unmarshaling either
+				// of them. This ensures pointers to slice elements are stable.
+				od := &pd.(*messageDesc).lazy.oneofs.list[v]
+				od.fields.list = append(od.fields.list, fd)
+				if fd.oneofType != nil {
+					panic("oneof type already set")
+				}
+				fd.oneofType = od
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_Name:
+				fd.fullName = nb.AppendFullName(pd.FullName(), v)
+			case fieldDesc_JSONName:
+				fd.hasJSONName = true
+				fd.jsonName = nb.MakeString(v)
+			case fieldDesc_Default:
+				fd.defVal.has = true
+				rawDefVal = v
+			case fieldDesc_TypeName:
+				rawTypeName = v
+			case fieldDesc_Options:
+				fd.unmarshalOptions(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	if !fd.hasJSONName {
+		fd.jsonName = nb.MakeJSONName(fd.Name())
+	}
+	if rawDefVal != nil {
+		var err error
+		fd.defVal.val, err = defval.Unmarshal(string(rawDefVal), fd.kind, defval.Descriptor)
+		if err != nil {
+			panic(err)
+		}
+	}
+	if fd.isWeak {
+		if len(rawTypeName) == 0 || rawTypeName[0] != '.' {
+			panic("weak target name must be fully qualified")
+		}
+		fd.messageType = ptype.PlaceholderMessage(pref.FullName(rawTypeName[1:]))
+	}
+
+	fd.parentFile.lazy.byName[fd.FullName()] = fd
+}
+
+func (fd *fieldDesc) unmarshalOptions(b []byte) {
+	fd.options = append(fd.options, b...)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fieldOptions_IsPacked:
+				fd.hasPacked = true
+				fd.isPacked = wire.DecodeBool(v)
+			case fieldOptions_IsWeak:
+				fd.isWeak = wire.DecodeBool(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
+
+func (od *oneofDesc) unmarshalFull(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	od.parentFile = pf
+	od.parent = pd
+	od.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case oneofDesc_Name:
+				od.fullName = nb.AppendFullName(pd.FullName(), v)
+			case oneofDesc_Options:
+				od.options = append(od.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	od.parentFile.lazy.byName[od.FullName()] = od
+}
+
+func (xd *extensionDesc) unmarshalFull(b []byte, nb *nameBuilder) {
+	var rawDefVal []byte
+	xd.lazy = new(extensionLazy)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_Cardinality:
+				xd.lazy.cardinality = pref.Cardinality(v)
+			case fieldDesc_Kind:
+				xd.lazy.kind = pref.Kind(v)
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case fieldDesc_JSONName:
+				xd.lazy.hasJSONName = true
+				xd.lazy.jsonName = nb.MakeString(v)
+			case fieldDesc_Default:
+				xd.lazy.defVal.has = true
+				rawDefVal = v
+			case fieldDesc_Options:
+				xd.unmarshalOptions(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	if rawDefVal != nil {
+		var err error
+		xd.lazy.defVal.val, err = defval.Unmarshal(string(rawDefVal), xd.lazy.kind, defval.Descriptor)
+		if err != nil {
+			panic(err)
+		}
+	}
+
+	xd.parentFile.lazy.byName[xd.FullName()] = xd
+}
+
+func (xd *extensionDesc) unmarshalOptions(b []byte) {
+	xd.lazy.options = append(xd.lazy.options, b...)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case fieldOptions_IsPacked:
+				xd.lazy.isPacked = wire.DecodeBool(v)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+}
+
+func (sd *serviceDesc) unmarshalFull(b []byte, nb *nameBuilder) {
+	var rawMethods [][]byte
+	sd.lazy = new(serviceLazy)
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case serviceDesc_Methods:
+				rawMethods = append(rawMethods, v)
+			case serviceDesc_Options:
+				sd.lazy.options = append(sd.lazy.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	if len(rawMethods) > 0 {
+		sd.lazy.methods.list = make([]methodDesc, len(rawMethods))
+		for i, b := range rawMethods {
+			sd.lazy.methods.list[i].unmarshalFull(b, nb, sd.parentFile, sd, i)
+		}
+	}
+
+	sd.parentFile.lazy.byName[sd.FullName()] = sd
+}
+
+func (md *methodDesc) unmarshalFull(b []byte, nb *nameBuilder, pf *fileDesc, pd pref.Descriptor, i int) {
+	md.parentFile = pf
+	md.parent = pd
+	md.index = i
+
+	for len(b) > 0 {
+		num, typ, n := wire.ConsumeTag(b)
+		b = b[n:]
+		switch typ {
+		case wire.VarintType:
+			v, m := wire.ConsumeVarint(b)
+			b = b[m:]
+			switch num {
+			case methodDesc_IsStreamingClient:
+				md.isStreamingClient = wire.DecodeBool(v)
+			case methodDesc_IsStreamingServer:
+				md.isStreamingServer = wire.DecodeBool(v)
+			}
+		case wire.BytesType:
+			v, m := wire.ConsumeBytes(b)
+			b = b[m:]
+			switch num {
+			case methodDesc_Name:
+				md.fullName = nb.AppendFullName(pd.FullName(), v)
+			case methodDesc_Options:
+				md.options = append(md.options, v...)
+			}
+		default:
+			m := wire.ConsumeFieldValue(num, typ, b)
+			b = b[m:]
+		}
+	}
+
+	md.parentFile.lazy.byName[md.FullName()] = md
+}
diff --git a/internal/fileinit/desc_list.go b/internal/fileinit/desc_list.go
new file mode 100644
index 0000000..6531fcd
--- /dev/null
+++ b/internal/fileinit/desc_list.go
@@ -0,0 +1,189 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fileinit
+
+import (
+	"fmt"
+	"sort"
+	"sync"
+
+	pragma "github.com/golang/protobuf/v2/internal/pragma"
+	pfmt "github.com/golang/protobuf/v2/internal/typefmt"
+	pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+)
+
+type fileImports []pref.FileImport
+
+func (p *fileImports) Len() int                            { return len(*p) }
+func (p *fileImports) Get(i int) pref.FileImport           { return (*p)[i] }
+func (p *fileImports) Format(s fmt.State, r rune)          { pfmt.FormatList(s, r, p) }
+func (p *fileImports) ProtoInternal(pragma.DoNotImplement) {}
+
+type names struct {
+	list []pref.Name
+	once sync.Once
+	has  map[pref.Name]struct{} // protected by once
+}
+
+func (p *names) Len() int            { return len(p.list) }
+func (p *names) Get(i int) pref.Name { return p.list[i] }
+func (p *names) Has(s pref.Name) bool {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.has = make(map[pref.Name]struct{}, len(p.list))
+			for _, s := range p.list {
+				p.has[s] = struct{}{}
+			}
+		}
+	})
+	_, ok := p.has[s]
+	return ok
+}
+func (p *names) Format(s fmt.State, r rune)          { pfmt.FormatList(s, r, p) }
+func (p *names) ProtoInternal(pragma.DoNotImplement) {}
+
+type enumRanges struct {
+	list   [][2]pref.EnumNumber // start inclusive; end inclusive
+	once   sync.Once
+	sorted [][2]pref.EnumNumber         // protected by once
+	has    map[pref.EnumNumber]struct{} // protected by once
+}
+
+func (p *enumRanges) Len() int                     { return len(p.list) }
+func (p *enumRanges) Get(i int) [2]pref.EnumNumber { return p.list[i] }
+func (p *enumRanges) Has(n pref.EnumNumber) bool {
+	p.once.Do(func() {
+		for _, r := range p.list {
+			if r[0] == r[1]-0 { // singular range (the end is inclusive)
+				if p.has == nil {
+					p.has = make(map[pref.EnumNumber]struct{}, len(p.list))
+				}
+				p.has[r[0]] = struct{}{}
+			} else {
+				p.sorted = append(p.sorted, r)
+			}
+		}
+		sort.Slice(p.sorted, func(i, j int) bool {
+			return p.sorted[i][0] < p.sorted[j][0]
+		})
+	})
+	if _, ok := p.has[n]; ok {
+		return true
+	}
+	for ls := p.sorted; len(ls) > 0; {
+		i := len(ls) / 2
+		switch r := ls[i]; {
+		case n < r[0]:
+			ls = ls[:i] // search lower
+		case n > r[1]: // the end is inclusive
+			ls = ls[i+1:] // search upper
+		default:
+			return true
+		}
+	}
+	return false
+}
+func (p *enumRanges) Format(s fmt.State, r rune)          { pfmt.FormatList(s, r, p) }
+func (p *enumRanges) ProtoInternal(pragma.DoNotImplement) {}
+
+type fieldRanges struct {
+	list   [][2]pref.FieldNumber // start inclusive; end exclusive
+	once   sync.Once
+	sorted [][2]pref.FieldNumber         // protected by once
+	has    map[pref.FieldNumber]struct{} // protected by once
+}
+
+func (p *fieldRanges) Len() int                      { return len(p.list) }
+func (p *fieldRanges) Get(i int) [2]pref.FieldNumber { return p.list[i] }
+func (p *fieldRanges) Has(n pref.FieldNumber) bool {
+	p.once.Do(func() {
+		for _, r := range p.list {
+			if r[0] == r[1]-1 { // singular range (the end is exclusive)
+				if p.has == nil {
+					p.has = make(map[pref.FieldNumber]struct{}, len(p.list))
+				}
+				p.has[r[0]] = struct{}{}
+			} else {
+				p.sorted = append(p.sorted, r)
+			}
+		}
+		sort.Slice(p.sorted, func(i, j int) bool {
+			return p.sorted[i][0] < p.sorted[j][0]
+		})
+	})
+	if _, ok := p.has[n]; ok {
+		return true
+	}
+	for ls := p.sorted; len(ls) > 0; {
+		i := len(ls) / 2
+		switch r := ls[i]; {
+		case n < r[0]:
+			ls = ls[:i] // search lower
+		case n >= r[1]: // the end is exclusive
+			ls = ls[i+1:] // search higher
+		default:
+			return true
+		}
+	}
+	return false
+}
+func (p *fieldRanges) Format(s fmt.State, r rune)          { pfmt.FormatList(s, r, p) }
+func (p *fieldRanges) ProtoInternal(pragma.DoNotImplement) {}
+
+type fieldNumbers struct {
+	list []pref.FieldNumber
+	once sync.Once
+	has  map[pref.FieldNumber]struct{} // protected by once
+}
+
+func (p *fieldNumbers) Len() int                   { return len(p.list) }
+func (p *fieldNumbers) Get(i int) pref.FieldNumber { return p.list[i] }
+func (p *fieldNumbers) Has(n pref.FieldNumber) bool {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.has = make(map[pref.FieldNumber]struct{}, len(p.list))
+			for _, n := range p.list {
+				p.has[n] = struct{}{}
+			}
+		}
+	})
+	_, ok := p.has[n]
+	return ok
+}
+func (p *fieldNumbers) Format(s fmt.State, r rune)          { pfmt.FormatList(s, r, p) }
+func (p *fieldNumbers) ProtoInternal(pragma.DoNotImplement) {}
+
+type oneofFields struct {
+	list   []pref.FieldDescriptor
+	once   sync.Once
+	byName map[pref.Name]pref.FieldDescriptor        // protected by once
+	byJSON map[string]pref.FieldDescriptor           // protected by once
+	byNum  map[pref.FieldNumber]pref.FieldDescriptor // protected by once
+}
+
+func (p *oneofFields) Len() int                                         { return len(p.list) }
+func (p *oneofFields) Get(i int) pref.FieldDescriptor                   { return p.list[i] }
+func (p *oneofFields) ByName(s pref.Name) pref.FieldDescriptor          { return p.lazyInit().byName[s] }
+func (p *oneofFields) ByJSONName(s string) pref.FieldDescriptor         { return p.lazyInit().byJSON[s] }
+func (p *oneofFields) ByNumber(n pref.FieldNumber) pref.FieldDescriptor { return p.lazyInit().byNum[n] }
+func (p *oneofFields) Format(s fmt.State, r rune)                       { pfmt.FormatList(s, r, p) }
+func (p *oneofFields) ProtoInternal(pragma.DoNotImplement)              {}
+
+func (p *oneofFields) lazyInit() *oneofFields {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[pref.Name]pref.FieldDescriptor, len(p.list))
+			p.byJSON = make(map[string]pref.FieldDescriptor, len(p.list))
+			p.byNum = make(map[pref.FieldNumber]pref.FieldDescriptor, len(p.list))
+			for _, f := range p.list {
+				// Field names and numbers are guaranteed to be unique.
+				p.byName[f.Name()] = f
+				p.byJSON[f.JSONName()] = f
+				p.byNum[f.Number()] = f
+			}
+		}
+	})
+	return p
+}
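Both range types above answer Has with the same split: ranges that cover a single
number go into a hash map, while the rest are binary-searched over a copy sorted by
start value. A standalone sketch of that lookup over plain int32 ranges with both ends
inclusive, matching the enumRanges semantics:

	package main

	import (
		"fmt"
		"sort"
	)

	// rangeSet answers membership queries over [start, end] ranges (both ends
	// inclusive): singular ranges are hashed, the rest are binary-searched.
	type rangeSet struct {
		sorted [][2]int32
		has    map[int32]struct{}
	}

	func newRangeSet(list [][2]int32) *rangeSet {
		p := &rangeSet{has: make(map[int32]struct{})}
		for _, r := range list {
			if r[0] == r[1] {
				p.has[r[0]] = struct{}{} // singular range
			} else {
				p.sorted = append(p.sorted, r)
			}
		}
		sort.Slice(p.sorted, func(i, j int) bool { return p.sorted[i][0] < p.sorted[j][0] })
		return p
	}

	func (p *rangeSet) Has(n int32) bool {
		if _, ok := p.has[n]; ok {
			return true
		}
		for ls := p.sorted; len(ls) > 0; {
			i := len(ls) / 2
			switch r := ls[i]; {
			case n < r[0]:
				ls = ls[:i] // search lower
			case n > r[1]:
				ls = ls[i+1:] // search higher
			default:
				return true
			}
		}
		return false
	}

	func main() {
		rs := newRangeSet([][2]int32{{5, 5}, {10, 20}})
		fmt.Println(rs.Has(5), rs.Has(15), rs.Has(20), rs.Has(21)) // true true true false
	}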
diff --git a/internal/fileinit/desc_list_gen.go b/internal/fileinit/desc_list_gen.go
new file mode 100644
index 0000000..5ec5663
--- /dev/null
+++ b/internal/fileinit/desc_list_gen.go
@@ -0,0 +1,345 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Code generated by generate-types. DO NOT EDIT.
+
+package fileinit
+
+import (
+	"fmt"
+	"sync"
+
+	"github.com/golang/protobuf/v2/internal/pragma"
+	"github.com/golang/protobuf/v2/internal/typefmt"
+	"github.com/golang/protobuf/v2/reflect/protoreflect"
+)
+
+type enumDescs struct {
+	list   []enumDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*enumDesc // protected by once
+}
+
+func (p *enumDescs) Len() int {
+	return len(p.list)
+}
+func (p *enumDescs) Get(i int) protoreflect.EnumDescriptor {
+	return &p.list[i]
+}
+func (p *enumDescs) ByName(s protoreflect.Name) protoreflect.EnumDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *enumDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *enumDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *enumDescs) lazyInit() *enumDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*enumDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type enumValueDescs struct {
+	list   []enumValueDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*enumValueDesc       // protected by once
+	byNum  map[protoreflect.EnumNumber]*enumValueDesc // protected by once
+}
+
+func (p *enumValueDescs) Len() int {
+	return len(p.list)
+}
+func (p *enumValueDescs) Get(i int) protoreflect.EnumValueDescriptor {
+	return &p.list[i]
+}
+func (p *enumValueDescs) ByName(s protoreflect.Name) protoreflect.EnumValueDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *enumValueDescs) ByNumber(n protoreflect.EnumNumber) protoreflect.EnumValueDescriptor {
+	if d := p.lazyInit().byNum[n]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *enumValueDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *enumValueDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *enumValueDescs) lazyInit() *enumValueDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*enumValueDesc, len(p.list))
+			p.byNum = make(map[protoreflect.EnumNumber]*enumValueDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+				if _, ok := p.byNum[d.Number()]; !ok {
+					p.byNum[d.Number()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type messageDescs struct {
+	list   []messageDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*messageDesc // protected by once
+}
+
+func (p *messageDescs) Len() int {
+	return len(p.list)
+}
+func (p *messageDescs) Get(i int) protoreflect.MessageDescriptor {
+	return &p.list[i]
+}
+func (p *messageDescs) ByName(s protoreflect.Name) protoreflect.MessageDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *messageDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *messageDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *messageDescs) lazyInit() *messageDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*messageDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type fieldDescs struct {
+	list   []fieldDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*fieldDesc        // protected by once
+	byJSON map[string]*fieldDesc                   // protected by once
+	byNum  map[protoreflect.FieldNumber]*fieldDesc // protected by once
+}
+
+func (p *fieldDescs) Len() int {
+	return len(p.list)
+}
+func (p *fieldDescs) Get(i int) protoreflect.FieldDescriptor {
+	return &p.list[i]
+}
+func (p *fieldDescs) ByName(s protoreflect.Name) protoreflect.FieldDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *fieldDescs) ByJSONName(s string) protoreflect.FieldDescriptor {
+	if d := p.lazyInit().byJSON[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *fieldDescs) ByNumber(n protoreflect.FieldNumber) protoreflect.FieldDescriptor {
+	if d := p.lazyInit().byNum[n]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *fieldDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *fieldDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *fieldDescs) lazyInit() *fieldDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*fieldDesc, len(p.list))
+			p.byJSON = make(map[string]*fieldDesc, len(p.list))
+			p.byNum = make(map[protoreflect.FieldNumber]*fieldDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+				if _, ok := p.byJSON[d.JSONName()]; !ok {
+					p.byJSON[d.JSONName()] = d
+				}
+				if _, ok := p.byNum[d.Number()]; !ok {
+					p.byNum[d.Number()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type oneofDescs struct {
+	list   []oneofDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*oneofDesc // protected by once
+}
+
+func (p *oneofDescs) Len() int {
+	return len(p.list)
+}
+func (p *oneofDescs) Get(i int) protoreflect.OneofDescriptor {
+	return &p.list[i]
+}
+func (p *oneofDescs) ByName(s protoreflect.Name) protoreflect.OneofDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *oneofDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *oneofDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *oneofDescs) lazyInit() *oneofDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*oneofDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type extensionDescs struct {
+	list   []extensionDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*extensionDesc // protected by once
+}
+
+func (p *extensionDescs) Len() int {
+	return len(p.list)
+}
+func (p *extensionDescs) Get(i int) protoreflect.ExtensionDescriptor {
+	return &p.list[i]
+}
+func (p *extensionDescs) ByName(s protoreflect.Name) protoreflect.ExtensionDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *extensionDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *extensionDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *extensionDescs) lazyInit() *extensionDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*extensionDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type serviceDescs struct {
+	list   []serviceDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*serviceDesc // protected by once
+}
+
+func (p *serviceDescs) Len() int {
+	return len(p.list)
+}
+func (p *serviceDescs) Get(i int) protoreflect.ServiceDescriptor {
+	return &p.list[i]
+}
+func (p *serviceDescs) ByName(s protoreflect.Name) protoreflect.ServiceDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *serviceDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *serviceDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *serviceDescs) lazyInit() *serviceDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*serviceDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
+
+type methodDescs struct {
+	list   []methodDesc
+	once   sync.Once
+	byName map[protoreflect.Name]*methodDesc // protected by once
+}
+
+func (p *methodDescs) Len() int {
+	return len(p.list)
+}
+func (p *methodDescs) Get(i int) protoreflect.MethodDescriptor {
+	return &p.list[i]
+}
+func (p *methodDescs) ByName(s protoreflect.Name) protoreflect.MethodDescriptor {
+	if d := p.lazyInit().byName[s]; d != nil {
+		return d
+	}
+	return nil
+}
+func (p *methodDescs) Format(s fmt.State, r rune) {
+	typefmt.FormatList(s, r, p)
+}
+func (p *methodDescs) ProtoInternal(pragma.DoNotImplement) {}
+func (p *methodDescs) lazyInit() *methodDescs {
+	p.once.Do(func() {
+		if len(p.list) > 0 {
+			p.byName = make(map[protoreflect.Name]*methodDesc, len(p.list))
+			for i := range p.list {
+				d := &p.list[i]
+				if _, ok := p.byName[d.Name()]; !ok {
+					p.byName[d.Name()] = d
+				}
+			}
+		}
+	})
+	return p
+}
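Every generated list type above follows one template: the slice is the source of truth,
and the lookup maps are built once, on first use, so registering a file at program init
only costs the slice allocation. A minimal sketch of that template over a hypothetical
item type:

	package main

	import (
		"fmt"
		"sync"
	)

	type item struct{ name string }

	type items struct {
		list   []item
		once   sync.Once
		byName map[string]*item // protected by once
	}

	func (p *items) ByName(s string) *item { return p.lazyInit().byName[s] }

	func (p *items) lazyInit() *items {
		p.once.Do(func() {
			if len(p.list) > 0 {
				p.byName = make(map[string]*item, len(p.list))
				for i := range p.list {
					d := &p.list[i]
					if _, ok := p.byName[d.name]; !ok {
						p.byName[d.name] = d // first entry wins when a name repeats
					}
				}
			}
		})
		return p
	}

	func main() {
		p := &items{list: []item{{"Foo"}, {"Bar"}}}
		fmt.Println(p.ByName("Bar") != nil, p.ByName("Baz") == nil) // true true
	}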
diff --git a/internal/fileinit/desc_wire.go b/internal/fileinit/desc_wire.go
new file mode 100644
index 0000000..b8b1684
--- /dev/null
+++ b/internal/fileinit/desc_wire.go
@@ -0,0 +1,94 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fileinit
+
+// Constants for field numbers of messages declared in descriptor.proto.
+const (
+	// FileDescriptorProto field numbers
+	fileDesc_Syntax        = 12 // optional string
+	fileDesc_Name          = 1  // optional string
+	fileDesc_Package       = 2  // optional string
+	fileDesc_Imports       = 3  // repeated string
+	fileDesc_PublicImports = 10 // repeated int32
+	fileDesc_WeakImports   = 11 // repeated int32
+	fileDesc_Enums         = 5  // repeated EnumDescriptorProto
+	fileDesc_Messages      = 4  // repeated DescriptorProto
+	fileDesc_Extensions    = 7  // repeated FieldDescriptorProto
+	fileDesc_Services      = 6  // repeated ServiceDescriptorProto
+	fileDesc_Options       = 8  // optional FileOptions
+
+	// EnumDescriptorProto field numbers
+	enumDesc_Name           = 1 // optional string
+	enumDesc_Values         = 2 // repeated EnumValueDescriptorProto
+	enumDesc_ReservedNames  = 5 // repeated string
+	enumDesc_ReservedRanges = 4 // repeated EnumReservedRange
+	enumDesc_Options        = 3 // optional EnumOptions
+
+	// EnumReservedRange field numbers
+	enumReservedRange_Start = 1 // optional int32
+	enumReservedRange_End   = 2 // optional int32
+
+	// EnumValueDescriptorProto field numbers
+	enumValueDesc_Name    = 1 // optional string
+	enumValueDesc_Number  = 2 // optional int32
+	enumValueDesc_Options = 3 // optional EnumValueOptions
+
+	// DescriptorProto field numbers
+	messageDesc_Name            = 1  // optional string
+	messageDesc_Fields          = 2  // repeated FieldDescriptorProto
+	messageDesc_Oneofs          = 8  // repeated OneofDescriptorProto
+	messageDesc_ReservedNames   = 10 // repeated string
+	messageDesc_ReservedRanges  = 9  // repeated ReservedRange
+	messageDesc_ExtensionRanges = 5  // repeated ExtensionRange
+	messageDesc_Enums           = 4  // repeated EnumDescriptorProto
+	messageDesc_Messages        = 3  // repeated DescriptorProto
+	messageDesc_Extensions      = 6  // repeated FieldDescriptorProto
+	messageDesc_Options         = 7  // optional MessageOptions
+
+	// ReservedRange field numbers
+	messageReservedRange_Start = 1 // optional int32
+	messageReservedRange_End   = 2 // optional int32
+
+	// ExtensionRange field numbers
+	messageExtensionRange_Start   = 1 // optional int32
+	messageExtensionRange_End     = 2 // optional int32
+	messageExtensionRange_Options = 3 // optional ExtensionRangeOptions
+
+	// MessageOptions field numbers
+	messageOptions_IsMapEntry = 7 // optional bool
+
+	// FieldDescriptorProto field numbers
+	fieldDesc_Name         = 1  // optional string
+	fieldDesc_Number       = 3  // optional int32
+	fieldDesc_Cardinality  = 4  // optional Label
+	fieldDesc_Kind         = 5  // optional Type
+	fieldDesc_JSONName     = 10 // optional string
+	fieldDesc_Default      = 7  // optional string
+	fieldDesc_OneofIndex   = 9  // optional int32
+	fieldDesc_TypeName     = 6  // optional string
+	fieldDesc_ExtendedType = 2  // optional string
+	fieldDesc_Options      = 8  // optional FieldOptions
+
+	// FieldOptions field numbers
+	fieldOptions_IsPacked = 2  // optional bool
+	fieldOptions_IsWeak   = 10 // optional bool
+
+	// OneofDescriptorProto field numbers
+	oneofDesc_Name    = 1 // optional string
+	oneofDesc_Options = 2 // optional OneofOptions
+
+	// ServiceDescriptorProto field numbers
+	serviceDesc_Name    = 1 // optional string
+	serviceDesc_Methods = 2 // repeated MethodDescriptorProto
+	serviceDesc_Options = 3 // optional ServiceOptions
+
+	// MethodDescriptorProto field numbers
+	methodDesc_Name              = 1 // optional string
+	methodDesc_InputType         = 2 // optional string
+	methodDesc_OutputType        = 3 // optional string
+	methodDesc_IsStreamingClient = 5 // optional bool
+	methodDesc_IsStreamingServer = 6 // optional bool
+	methodDesc_Options           = 4 // optional MethodOptions
+)
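These constants are field numbers from descriptor.proto. The unmarshal methods compare
them against the number decoded from each wire tag, which packs the field number and
wire type as number<<3 | wireType. Two worked examples of that arithmetic:

	package main

	import "fmt"

	func main() {
		const (
			fileDesc_Name         = 1 // optional string (length-delimited, wire type 2)
			fieldOptions_IsPacked = 2 // optional bool (varint, wire type 0)
		)
		fmt.Printf("0x%02x\n", fileDesc_Name<<3|2)         // 0x0a
		fmt.Printf("0x%02x\n", fieldOptions_IsPacked<<3|0) // 0x10
	}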
diff --git a/internal/fileinit/name_pure.go b/internal/fileinit/name_pure.go
new file mode 100644
index 0000000..d03ffed
--- /dev/null
+++ b/internal/fileinit/name_pure.go
@@ -0,0 +1,46 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build purego appengine
+
+package fileinit
+
+import pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+
+func getNameBuilder() *nameBuilder { return nil }
+func putNameBuilder(*nameBuilder)  {}
+
+type nameBuilder struct{}
+
+// AppendFullName is equivalent to protoreflect.FullName.Append.
+func (*nameBuilder) AppendFullName(prefix pref.FullName, name []byte) fullName {
+	return fullName{
+		shortLen: len(name),
+		fullName: prefix.Append(pref.Name(name)),
+	}
+}
+
+// MakeString is equivalent to string(b), but optimized for large batches
+// with a shared lifetime.
+func (*nameBuilder) MakeString(b []byte) string {
+	return string(b)
+}
+
+// MakeJSONName creates a JSON name from the protobuf short name.
+func (*nameBuilder) MakeJSONName(s pref.Name) string {
+	var b []byte
+	var wasUnderscore bool
+	for i := 0; i < len(s); i++ { // proto identifiers are always ASCII
+		c := s[i]
+		if c != '_' {
+			isLower := 'a' <= c && c <= 'z'
+			if wasUnderscore && isLower {
+				c -= 'a' - 'A'
+			}
+			b = append(b, c)
+		}
+		wasUnderscore = c == '_'
+	}
+	return string(b)
+}
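MakeJSONName implements the usual protobuf JSON-name derivation: underscores are
dropped and a lowercase ASCII letter that immediately follows an underscore is
uppercased. A standalone copy of the loop with a few sample inputs:

	package main

	import "fmt"

	// jsonName mirrors nameBuilder.MakeJSONName: underscores are dropped and a
	// lowercase letter that follows an underscore is uppercased.
	func jsonName(s string) string {
		var b []byte
		var wasUnderscore bool
		for i := 0; i < len(s); i++ { // proto identifiers are always ASCII
			c := s[i]
			if c != '_' {
				if wasUnderscore && 'a' <= c && c <= 'z' {
					c -= 'a' - 'A'
				}
				b = append(b, c)
			}
			wasUnderscore = c == '_'
		}
		return string(b)
	}

	func main() {
		fmt.Println(jsonName("foo_bar_baz")) // fooBarBaz
		fmt.Println(jsonName("foo__bar"))    // fooBar
		fmt.Println(jsonName("foo_1bar"))    // foo1bar ('1' is not a lowercase letter)
	}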
diff --git a/internal/fileinit/name_unsafe.go b/internal/fileinit/name_unsafe.go
new file mode 100644
index 0000000..2baca9b
--- /dev/null
+++ b/internal/fileinit/name_unsafe.go
@@ -0,0 +1,138 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !purego,!appengine
+
+package fileinit
+
+import (
+	"sync"
+	"unsafe"
+
+	pref "github.com/golang/protobuf/v2/reflect/protoreflect"
+)
+
+var nameBuilderPool = sync.Pool{
+	New: func() interface{} { return new(nameBuilder) },
+}
+
+func getNameBuilder() *nameBuilder {
+	return nameBuilderPool.Get().(*nameBuilder)
+}
+func putNameBuilder(b *nameBuilder) {
+	nameBuilderPool.Put(b)
+}
+
+type nameBuilder struct {
+	sb stringBuilder
+}
+
+// AppendFullName is equivalent to protoreflect.FullName.Append,
+// but optimized for large batches where each name has a shared lifetime.
+func (nb *nameBuilder) AppendFullName(prefix pref.FullName, name []byte) fullName {
+	n := len(prefix) + len(".") + len(name)
+	if len(prefix) == 0 {
+		n -= len(".")
+	}
+	nb.grow(n)
+	nb.sb.WriteString(string(prefix))
+	nb.sb.WriteByte('.')
+	nb.sb.Write(name)
+	return fullName{
+		shortLen: len(name),
+		fullName: pref.FullName(nb.last(n)),
+	}
+}
+
+// MakeString is equivalent to string(b), but optimized for large batches
+// with a shared lifetime.
+func (nb *nameBuilder) MakeString(b []byte) string {
+	nb.grow(len(b))
+	nb.sb.Write(b)
+	return nb.last(len(b))
+}
+
+// MakeJSONName creates a JSON name from the protobuf short name.
+func (nb *nameBuilder) MakeJSONName(s pref.Name) string {
+	nb.grow(len(s))
+	var n int
+	var wasUnderscore bool
+	for i := 0; i < len(s); i++ { // proto identifiers are always ASCII
+		c := s[i]
+		if c != '_' {
+			isLower := 'a' <= c && c <= 'z'
+			if wasUnderscore && isLower {
+				c -= 'a' - 'A'
+			}
+			nb.sb.WriteByte(c)
+			n++
+		}
+		wasUnderscore = c == '_'
+	}
+	return nb.last(n)
+}
+
+func (nb *nameBuilder) last(n int) string {
+	s := nb.sb.String()
+	return s[len(s)-n:]
+}
+
+func (nb *nameBuilder) grow(n int) {
+	const batchSize = 1 << 16
+	if nb.sb.Cap()-nb.sb.Len() < n {
+		nb.sb.Reset()
+		nb.sb.Grow(batchSize)
+	}
+}
+
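The methods above amortize string allocations: names for a whole file are appended into
one large buffer, last returns a substring aliasing the tail of that buffer, and grow
starts a fresh 64KiB batch only when the current one cannot fit the next name. A sketch
of the same batching idea using only the safe strings.Builder API:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		var sb strings.Builder
		sb.Grow(1 << 16) // one large batch, like nameBuilder.grow

		// intern appends a name to the shared buffer and returns a substring
		// aliasing it, so many names share a single backing allocation.
		intern := func(name string) string {
			start := sb.Len()
			sb.WriteString(name)
			return sb.String()[start:]
		}

		a := intern("google.protobuf.FileDescriptorProto")
		b := intern("google.protobuf.DescriptorProto")
		fmt.Println(a, b)
	}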
+// stringBuilder is a simplified copy of strings.Builder from Go1.12:
+//	* removed the shallow copy check
+//	* removed methods that we do not use (e.g. WriteRune)
+//
+// A forked version is used:
+//	* to support Go1.9, since strings.Builder was not added until Go1.10
+//	* for the Cap method, which was missing until Go1.12
+//
+// TODO: Remove this when Go1.12 is the minimally supported toolchain version.
+type stringBuilder struct {
+	buf []byte
+}
+
+func (b *stringBuilder) String() string {
+	return *(*string)(unsafe.Pointer(&b.buf))
+}
+func (b *stringBuilder) Len() int {
+	return len(b.buf)
+}
+func (b *stringBuilder) Cap() int {
+	return cap(b.buf)
+}
+func (b *stringBuilder) Reset() {
+	b.buf = nil
+}
+func (b *stringBuilder) grow(n int) {
+	buf := make([]byte, len(b.buf), 2*cap(b.buf)+n)
+	copy(buf, b.buf)
+	b.buf = buf
+}
+func (b *stringBuilder) Grow(n int) {
+	if n < 0 {
+		panic("stringBuilder.Grow: negative count")
+	}
+	if cap(b.buf)-len(b.buf) < n {
+		b.grow(n)
+	}
+}
+func (b *stringBuilder) Write(p []byte) (int, error) {
+	b.buf = append(b.buf, p...)
+	return len(p), nil
+}
+func (b *stringBuilder) WriteByte(c byte) error {
+	b.buf = append(b.buf, c)
+	return nil
+}
+func (b *stringBuilder) WriteString(s string) (int, error) {
+	b.buf = append(b.buf, s...)
+	return len(s), nil
+}