I am running this query:

index=* source="/somesource/*" message "403" | search level IN (ERROR)

and the response is:

{ "instant": { "epochSecond": 1707978481, "nanoOfSecond": 72000000 }, "thread": "main", "level": "ERROR", "message": "Error while creating user group", "thrown": { "commonElementCount": 0, "extendedStackTrace": "403 Forbidden:" }, "endOfBatch": false, "threadId": 1, "threadPriority": 5, "timestamp": "2024-02-15T06:28:01.072+0000" }

Now, when I ran the following query:

index=* source="/somesource/*" message "403" | search level IN (ERROR) | eval Test=substr(message,1,5) | eval Test1=substr(thrown.extendedStackTrace, 1, 3) | table Test, Test1

I am getting a value for Test, and the correct substring is extracted (the output is "Error"). But Test1 is an empty string, whereas I am expecting "403". Since message is at the root it works, but extendedStackTrace is under thrown, and thrown.extendedStackTrace is not rendering the correct result. Although, if I do

...| table Test, Test1, thrown.extendedStackTrace

a proper value comes through for thrown.extendedStackTrace. What am I missing?
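One likely cause, sketched against the mock event above: in eval expressions, a field name that contains dots, such as thrown.extendedStackTrace, must be wrapped in single quotes. Without the quotes, eval parses the dot as the concatenation operator joining two nonexistent fields named thrown and extendedStackTrace, which produces an empty result.

```spl
index=* source="/somesource/*" message "403"
| search level IN (ERROR)
| eval Test=substr(message, 1, 5)
| eval Test1=substr('thrown.extendedStackTrace', 1, 3)
| table Test, Test1
```

Commands that take plain field-name lists, such as table, accept the dotted name as-is, which is why | table Test, Test1, thrown.extendedStackTrace shows the value while the unquoted eval does not.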
Hi @HPACHPANDE, you have to save the results of your search in a lookup using the outputlookup command (https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Outputlookup). The fields to save in the lookup depend on your search. Ciao. Giuseppe
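As a minimal sketch (the index, sourcetype, and lookup name here are placeholders, not from the original thread):

```spl
index=your_index sourcetype=your_sourcetype
| table Host SourceIP DestinationIP
| outputlookup host_ip_tracking.csv
```

Note that outputlookup overwrites the lookup by default; add append=true to accumulate rows across runs.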
Tested example for one-shot use to rebuild all thawed buckets for a single index, which also includes the index name as required by newer versions:

ls -dA /data/idx/hot/splunk/{index-name}/thaweddb/db_* | xargs -I BUCKET --max-procs=10 /opt/splunk/bin/splunk rebuild BUCKET {index-name}
Hello Team, I need help with the points below:
1] How to add an entry from a search, with the fields Host, SourceIP and DestinationIP, into a lookup table.
2] How to add an entry into a lookup table from a triggered notable or from the notable's contributing events.
The requirement is to create a correlation rule from the lookup values, which will be taken from previously triggered notables.
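A hedged sketch of both points (the lookup name is hypothetical; in Enterprise Security, triggered notables are typically searchable in the notable index): append the fields of interest to a lookup, then read the lookup back inside the correlation search.

```spl
index=notable
| table Host SourceIP DestinationIP
| outputlookup append=true tracked_notables.csv
```

The correlation rule could then start from | inputlookup tracked_notables.csv, or use the lookup to filter incoming events against previously seen hosts and IPs.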
The emulation works wonderfully when doing it in my test environment; however, when doing the emulation on the search head, the "INTERESTING FIELDS" field names and their values are overriding the extracted values:

host Component Value _time
  F_Type_1 F_Type_1_Section_5_Value 2024-02-14 21:28:25
  F_Type_1 F_Type_1_Section_5_Value 2024-02-14 21:28:25
  F_Type_1 F_Type_1_Section_5_Value 2024-02-14 21:28:25

So I had to remove the auto-extracted field at the beginning. Here is the final emulation on live data:

| fields - Section_5
| dedup host
| eval data = split(_raw, " ")
| eval data = mvfilter(match(data, "^Component="))
| mvexpand data
| rename data AS _raw
| extract pairdelim=",", kvdelim="="
| rename Section_5 AS Value
| table host Component Value _time

Thank you so much for your help!
Due to an issue with proper cleanup of idle processes, the number of python processes (appserver.py) running on the system constantly grows. Through this system-wide memory growth, these stale processes eventually cause an OOM. Run the following search to find whether any search head is impacted by this issue, and what percentage of total system memory is held by stale processes running for more than 24 hours. If these processes are using more than 15% of total system memory, run the script below to kill the stale processes.

index=_introspection host=<all search heads> appserver.py data.elapsed > 86400
| dedup host, data.pid
| stats dc(data.pid) as cnt sum("data.pct_memory") AS appserver_memory_used by host
| sort - appserver_memory_used

On linux/unix you can use the following script to kill stale processes and reclaim memory:

kill -TERM $(ps -eo etimes,pid,cmd | awk '{if ( $1 >= 86400) print $2 " " $4 }' | grep appserver.py | awk '{print $1}')
The Search Head appears to have a rogue python process (appserver.py) that slowly eats away all memory on the system and eventually causes an OOM, which requires a manual restart of splunkd; then the issue slowly starts creeping up again.
Were you able to find a way to resolve this issue? We're seeing the same thing, complete with the same error message in log.log. For future users, the way to get around SSO if the setup fails is to append ?loginType=uba to the end of your login URL (https://example.com/?loginType=uba).
It would help to know what you've already tried. Have you looked at it the other way, that is, what sources does each DM use?

| tstats count from datamodel=<DM name> by source

You should be able to combine that with | rest /services/data/models and map to list the sources used by all datamodels.
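A sketch of that combination, assuming the rest endpoint exposes the model name in the title field; this is untested and the map quoting may need adjusting for your version:

```spl
| rest /services/data/models
| fields title
| map maxsearches=100 search="| tstats count from datamodel=$title$ by source | eval datamodel=\"$title$\""
```

The eval at the end carries the model name into each result row so the per-model source lists stay distinguishable after map flattens them.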
Hi, just send all events to SCP as from any UF. If you also need dbxquery on SCP, then you must install DB Connect and the needed DB drivers into it, and of course open the FW from the SCP search head(s). r. Ismo
Multi-line explains why default Component and Section_5 do not contain all data. Do not worry about props.conf, then. This is what you can do:

| sort host _time
| eval data = split(_raw, " ")
| eval data = mvfilter(match(data, "^Component="))
| mvexpand data
| rename data AS _raw
| extract
| rename Section_5 AS Value
| table host Component Value _time

This is an emulation you can play with and compare with real data:

| makeresults
| eval _raw="TimeStamp Component=F_Type_1,.....,Section_5=F_Type_1_Section_5_Value Component=F_Type_2,.....,Section_5=F_Type_2_Section_5_Value Component=F_Type_3,.....,Section_5=F_Type_3_Section_5_Value"
``` data emulation above ```

The output is then:

host Component Value _time
  F_Type_1 F_Type_1_Section_5_Value 2024-02-14 21:28:25
  F_Type_2 F_Type_2_Section_5_Value 2024-02-14 21:28:25
  F_Type_3 F_Type_3_Section_5_Value 2024-02-14 21:28:25

Hope this helps
Are there any messages in the forwarder's splunkd.log that might explain what is happening?  Look for "DC:" in the log.
Thank you so much!
Hi @Geoff.Wild, Thanks for following up with the solution! 
@Ryan.Paredez you were close. Here's the final solution. Install a second DBAgent on the connector server using the latest from AppD:
1] Download db-agent-24.1.0.3704.zip
2] mkdir /opt/appdynamics/dbagent1
3] Unzip the agent there.
4] Replace the lib/mysql-connector-java-8.0.27.jar with mysql-connector-java-commercial-5.1.13.jar (this one works with older MySQL DBs), then change the filename back to mysql-connector-java-8.0.27.jar
5] Make sure the agent has a different name than the current one by updating the startup options in the start-dbagent script like so:
exec "$JAVACMD" "${JVM_OPTS[@]}" -Ddbagent.name="OldMysql-DB-Agent1" -jar "${APP_HOME}/db-agent.jar"
6] Update the conf/controller-info.xml as needed.
7] Start the agent.
8] In the AppD GUI, update the collector to use the new agent, OldMysql-DB-Agent1 (or whatever you named it).
There is no API that will provide every event Splunk receives. Splunk does not want to make it easy to transition to a different product. To use the API, you'll have to run a search (perhaps a real-time search). Depending on the other tool, you may be able to use Ingest Actions to fork the data to S3, where the other tool may be able to pick them up.
It appears no props.conf has been created; I'll talk more with the Admin. As for the raw data, it's a single multi-line event:

TimeStamp Component=F_Type_1,.....,Section_5=F_Type_1_Section_5_Value Component=F_Type_2,.....,Section_5=F_Type_2_Section_5_Value Component=F_Type_3,.....,Section_5=F_Type_3_Section_5_Value

But the emulation is to ignore that TimeStamp:

| makeresults
| eval data=split("Component=F_Type_1,.....,Section_5=F_Type_1_Section_5_Value Component=F_Type_2,.....,Section_5=F_Type_2_Section_5_Value Component=F_Type_3,.....,Section_5=F_Type_3_Section_5_Value", " ")
| mvexpand data
| rename data AS _raw
``` emulation assuming Splunk "forgets" to extract ```
Hello, yuanliu. Thank you for reaching out. While I agree that the excerpt I posted is indeed JSON, the full _raw has much more text, and a lot of cleanup would be necessary before spath could be useful. Considering my limited experience with Splunk at this point, it would be much more difficult to figure out which errors are caused by my shortcomings and which are caused by the need to prep _raw for spath to work its magic.
You do not need to remove the timestamp per se. Just let us know whether the mock data is a single, multi-line event (emulation 2) or multiple events (emulation 1).
Even so, your code will be more robust and much more maintainable if you don't treat JSON data as text. The mock data looks very much like an excerpt from compliant JSON, but part of the object contains an embedded escaped JSON string, hence you want some special handling. If you can post complete mock data with the original structure, you will see that there is nothing that Splunk's QA-tested spath command cannot handle.
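To illustrate with a self-contained emulation (the mock event is trimmed to just the fields discussed in this thread): spath walks the nested object, and the single-quoted dotted field name then works inside eval.

```spl
| makeresults
| eval _raw="{\"level\": \"ERROR\", \"message\": \"Error while creating user group\", \"thrown\": {\"extendedStackTrace\": \"403 Forbidden:\"}}"
| spath
| eval code=substr('thrown.extendedStackTrace', 1, 3)
| table message, thrown.extendedStackTrace, code
```

With real data, the same pattern applies once _raw has been reduced to the compliant JSON portion.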