Umbraco v6.1.3 Lucene Index Corruption
I just upgraded an Umbraco v6.1.1 site to v6.1.3. I did this on my workstation, then copied the files to the web server after deleting what was there, and did the same with the database. I set the directory permissions and ran the site. The site (which is MVC) runs, but there are two issues I can't fathom and would appreciate help with.
One page errors with a "Read past EOF" error. Below is the view it's trying to run; the error occurs on the line marked with a comment.
```
@inherits Umbraco.Web.Mvc.UmbracoTemplatePage
@{
    Layout = "BasePage.cshtml";
}
<div class="row-fluid">
    <div class="span12">
        <h1>@Umbraco.Field("pagename")</h1>
        @Umbraco.Field("pagetext")
    </div>
</div>
<div class="row-fluid">
    <div class="span12">
        @foreach (var page in Model.Content.Children)
        {
            <section class="well">
                <h3>@page.Name</h3>
                @if (page.Children.Count() > 0)
                {
                    <ul>
                        @* The error occurs on the foreach below *@
                        @foreach (var pub in page.Children)
                        {
                            <li><a href="@Umbraco.Media(pub.GetPropertyValue("publication")).Url" title="@pub.Name" target="_blank">@pub.Name</a></li>
                        }
                    </ul>
                }
            </section>
        }
    </div>
</div>
```
The stack trace is:
```
[IOException: Read past EOF]
   Lucene.Net.Index.FindSegmentsFile.Run(IndexCommit commit) +2040
   Lucene.Net.Index.DirectoryReader.Open(Directory directory, IndexDeletionPolicy deletionPolicy, IndexCommit commit, Boolean readOnly, Int32 termInfosIndexDivisor) +57
   Lucene.Net.Search.IndexSearcher..ctor(Directory path, Boolean readOnly) +29
   Examine.LuceneEngine.Providers.LuceneSearcher.ValidateSearcher(Boolean forceReopen) +136
```
The other issue (which I think is related) is in the CMS: when opening the Developer section, a JavaScript alert pops up with a huge error message relating to Lucene:
```
{
  "Message": "An error has occurred.",
  "ExceptionMessage": "The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json; charset=utf-8'.",
  "ExceptionType": "System.InvalidOperationException",
  "StackTrace": null,
  "InnerException": {
    "Message": "An error has occurred.",
    "ExceptionMessage": "Could not create an index searcher with the supplied lucene directory",
    "ExceptionType": "System.ApplicationException",
    "StackTrace": "   at Examine.LuceneEngine.Providers.LuceneSearcher.ValidateSearcher(Boolean forceReopen)\r\n   at Examine.LuceneEngine.Providers.LuceneSearcher.GetSearcher()\r\n   at Umbraco.Web.Search.ExamineExtensions.GetIndexReaderForSearcher(BaseLuceneSearcher searcher)\r\n   at Umbraco.Web.Search.ExamineExtensions.GetIndexDocumentCount(LuceneIndexer indexer)\r\n   at Umbraco.Web.WebServices.ExamineManagementApiController.CreateModel(BaseIndexProvider indexer)\r\n   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()\r\n   at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)\r\n   at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)\r\n   at Newtonsoft.Json.Serialization.JsonArrayContract.CreateWrapper(Object list)\r\n   at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\r\n   at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value)\r\n   at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value)\r\n   at System.Net.Http.Formatting.JsonMediaTypeFormatter.<>c__DisplayClassd.<WriteToStreamAsync>b__c()\r\n   at System.Threading.Tasks.TaskHelpers.RunSynchronously(Action action, CancellationToken token)",
    "InnerException": {
      "Message": "An error has occurred.",
      "ExceptionMessage": "Read past EOF",
      "ExceptionType": "System.IO.IOException",
      "StackTrace": "   at Lucene.Net.Index.SegmentInfos.FindSegmentsFile.Run(IndexCommit commit)\r\n   at Lucene.Net.Index.DirectoryReader.Open(Directory directory, IndexDeletionPolicy deletionPolicy, IndexCommit commit, Boolean readOnly, Int32 termInfosIndexDivisor)\r\n   at Lucene.Net.Search.IndexSearcher..ctor(Directory path, Boolean readOnly)\r\n   at Examine.LuceneEngine.Providers.LuceneSearcher.ValidateSearcher(Boolean forceReopen)"
    }
  }
}
```
I have tried the Umbraco forum but have had no replies. If it's a no-brainer, I still need to know, of course.
Any advice would be appreciated.
I'd find the indexes (they are in App_Data\TEMP\....), delete them, and restart the app pool.
Umbraco will rebuild them on the next start-up (this can take 5-10 minutes on a huge database of around 150k nodes).
It might be a corrupt index (in that case, grab a backup and run Luke against it to see if it's broken), or possibly the version of the index format changed, which produces the same result.
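The delete-and-rebuild step above can be sketched as a shell snippet. This is a minimal sketch, not a definitive procedure: the `ExamineIndexes` folder name is an assumption based on the default Umbraco/Examine layout, and the demo root created here is just for illustration; point it at your real site root, and stop the app pool before deleting anything.

```shell
# Demo root standing in for the site root (in practice, use the real path
# and stop the IIS app pool before touching the index files).
APP_ROOT="$(mktemp -d)"

# Simulate the default Examine index folders (assumed layout).
mkdir -p "$APP_ROOT/App_Data/TEMP/ExamineIndexes/Internal"
mkdir -p "$APP_ROOT/App_Data/TEMP/ExamineIndexes/External"

# Delete the (possibly corrupt) index folders.
rm -rf "$APP_ROOT"/App_Data/TEMP/ExamineIndexes/*

# On the next start-up, Umbraco recreates and repopulates the indexes.
ls "$APP_ROOT/App_Data/TEMP/ExamineIndexes"   # prints nothing: folder is empty
```

Restarting the app pool afterwards triggers the rebuild described above.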