All Posts

Hi @alexeysharkov, I found similar behavior, which changed after putting span right after the command: | timechart span=5m count BY log.bankCode Ciao. Giuseppe
Hello, I have XML messages in my search results. A row looks like this:

<log><local_time>2025-02-25T15:02:59:955059+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_3110449968.pdf</fileName><size>555468</size><iin>800716350670</iin><agrementNumber>3110449968</agrementNumber><agrementDate>08.11.2011</agrementDate><referenceId>HKBRZA0000388473</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>
<log><local_time>2025-02-25T15:02:59:885557+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_dbz.pdf</fileName><size>152868</size><iin>840625302683</iin><agrementNumber>4301961740</agrementNumber><agrementDate>21.06.2023</agrementDate><referenceId>HKBRZA0000388476</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>

After searching, I see the date and time, down to seconds and subseconds, in the _time and log.local_time fields, which seems OK. But when I build a timechart, it behaves as if it only knows about hours, not minutes and seconds; my span=5m is ignored. Using either _time or log.local_time is fine for me. I have tried various ways to parse the timestamp with strptime, but with no luck. Thanks.
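One thing worth noting: local_time uses a colon before the microseconds (15:02:59:955059) rather than a period, which the usual strptime subsecond formats won't match. A minimal sketch of one way around that, assuming the field names above and that the %6N and %:z format variables behave as documented in your Splunk version:

| eval ts = replace('log.local_time', "(\d{2}:\d{2}:\d{2}):(\d{6})", "\1.\2")
| eval _time = strptime(ts, "%Y-%m-%dT%H:%M:%S.%6N%:z")
| timechart span=5m count BY log.bankCode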
Hello Splunkers! I am writing to bring to your attention a critical issue we are experiencing following our recent migration of Splunk from version 8.1.1 to 9.1.1. During routine operations, specifically while attempting to schedule reports from the dashboard using the noop command, we encounter a FATAL error reporting a "bad allocation": Server reported HTTP status=400 while getting mode=results: bad allocation. Please help me get this fixed.
Hello, I recently moved the ES app from one SH to another, non-clustered SH. After that, this error started appearing: Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel
Thanks for your help, really appreciated! Regarding "Convert real-time searches into scheduled searches" in the screenshot below: is real-time the same as ad-hoc? Could you please help me differentiate between historical, real-time, summarization, and ad-hoc searches?
Hi all, just to notify you that this issue has been investigated in depth with customer support, and the fix should be included in the upcoming Splunk 9.4.2 release. Regards
Could you please tell me how to restore admin?
Hi @chenfan, let us know if we can help you more, or please accept one answer for the other people in the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Here is my final configuration to resolve this issue. I hope it helps anyone experiencing a similar problem. On the UF, in $SPLUNK_HOME/etc/system/local/server.conf:

[queue=parsingQueue]
maxSize = 20MB

[general]
parallelIngestionPipelines = 4
We have an index named ABC with a searchable retention period of 180 days and an archival period of 3 years. I would like to transfer all logs to AWS S3; they are currently stored in Splunk archive storage. Could you please advise on how to accomplish this? Also, will this process move both searchable logs and archived logs to S3? I would appreciate a step-by-step guide. If anyone has knowledge of this process, I would be grateful for your assistance. Thank you.
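For context, the searchable and archive periods described here are typically driven by settings like these in indexes.conf. A minimal sketch, with a hypothetical archive path and the value implied above (180 days = 15552000 seconds); copying the frozen buckets from that directory to S3 (for example with aws s3 sync) would be a separate step:

[ABC]
# Buckets roll to frozen (archive) after 180 days of searchability.
frozenTimePeriodInSecs = 15552000
# Frozen buckets are copied here instead of being deleted.
coldToFrozenDir = /opt/splunk/archive/ABC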
@SN1 ES does require that the "admin" account exist. By default, saved searches use the "dispatchAs" setting of "owner". The owner of the searches is set to "admin" via default.meta, so removing the admin user will cause searches to fail. If the admin user is removed, the following error will be observed when searches that attempt to run under the admin user are executed: 'DispatchManager': The user 'admin' does not have sufficient search privileges. To fix this issue, restore the admin user. The searches should begin working immediately (no restart required).
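As a rough illustration of where that ownership comes from, ES apps carry metadata stanzas along these lines in default.meta (a sketch; the exact stanzas and access lists vary by app):

[savedsearches]
# All saved searches in the app are owned by, and dispatched as, admin.
access = read : [ * ], write : [ admin ]
owner = admin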
Recently I migrated ES from one SH to another, non-clustered SH. This error was popping up in panels of the ES app: Error in 'DispatchManager': The user 'admin' does not have sufficient search privileges. To resolve this, I searched for the error and found a suggestion to remove owner=admin from the default.meta file. It worked for some panels, but some panels still show this error.
"Name, Description, and Location are all multivalue fields that directly correspond to each other. Here is the sample for one of the hosts:"

Name   Description    Location
name1  description1   location1
name2  description2   location2
name3  description3   location3
name4  description4   location4

Can you explain how this is a sample for ONE of the hosts? Does the above represent one field with five lines, the first line being "Name Description Location"? Or do you mean that a sample for one of the hosts looks like this, with each field holding four values?

Name Description Location
name1 name2 name3 name4
description1 description2 description3 description4
location1 location2 location3 location4

Or something totally different? Also, your SPL snippet doesn't show the mvexpand command that causes the memory error. How are you using mvexpand? Additionally, what is the expected output from the sample, once you clarify what the sample actually looks like?
Second what @ITWhisperer says. If the raw event is not entirely JSON, the event must have a JSON message embedded in it. In that case, Splunk would not have extracted JSON fields automatically. It is strongly recommended that you treat structured data as structured data and not use regex to extract from it. The way to do this is to extract the JSON part into its own field so you can perform structured extraction. Please post a sample of a complete event.
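A minimal sketch of that approach, assuming the JSON object is embedded somewhere in the raw event (the field name json_part is illustrative):

| rex field=_raw "(?<json_part>\{.*\})"
| spath input=json_part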
Thanks!
I'll answer my own post/question; it could be helpful for someone in a similar situation. The issue was caused by three nonexistent roles that were still present in authorize.conf. After disabling those roles in authorize.conf, the update worked.
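As a hypothetical illustration of the kind of authorize.conf stanza involved (the role name here is made up):

# Stanza for a role that no longer exists; commenting it out
# (or deleting it) lets the update proceed.
# [role_legacy_auditor]
# importRoles = user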
You're trying to run too many searches. There are several things you can do about that:
- Convert real-time searches into scheduled searches (a sketch follows this list).
- Disable the searches you don't need.
- Run the remaining searches less often.
- Re-schedule searches so they are evenly distributed around the clock. Use the Extended Search Reporting dashboard to find the busy and not-so-busy times.
- Add more CPUs to your search head(s). You may also need to add CPUs to the indexers. Do this only after performing all of the above tasks.
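For the first item, a minimal sketch of what a converted search can look like in savedsearches.conf (the stanza name, search string, and schedule are illustrative):

[Hypothetical Error Alert]
search = index=main sourcetype=app_logs log_level=ERROR
dispatch.earliest_time = -5m
dispatch.latest_time = now
# Run every 5 minutes instead of continuously in real time.
cron_schedule = */5 * * * *
enableSched = 1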
Hi, try enclosing message=*Unit state update from cook client target* in double quotes, like this: message="*Unit state update from cook client target*". I think the problem is the whitespace between unit, state, etc. I hope this helps.
This looks like JSON - has the event been ingested as JSON, with the message field already extracted?
Try something like this:

| eval combined=mvzip(name,mvzip(location,description,"|"),"|")
| stats count by combined
| eval name=mvindex(split(combined,"|"),0)
| eval location=mvindex(split(combined,"|"),1)
| eval description=mvindex(split(combined,"|"),2)

mvzip pairs the i-th value of each multivalue field, so corresponding name/location/description triples stay together through the stats and are split back out afterwards.