All Posts


Hi @RemyaT, let me make sure I understand: do you want to count only events with response_code=403, or a count of all response_codes for the minutes that contain at least one 403? If the first, you can try:

index=sample_index path=*/sample_path* response_code=403
| timechart span=1m count

If the second:

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count(eval(response_code="200")) AS 200_count count(eval(response_code="403")) AS 403_count BY _time
| where '403_count' > 0

Ciao. Giuseppe
I have a query to find the response code and count vs. time (in 1-minute intervals):

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count by _time responseCode

The result shows the response code and count vs. time for each minute. But I only need the 1-minute buckets that contain a 403 response code along with other response codes, and want to skip the buckets that don't have a 403. For example, if during time1 there are only events with response code 200, I don't need that in my result. But if during time2 there are events with response codes 200 and 403, I need that in the result as time, response code, count.
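A minimal sketch of one way to keep only the 1-minute buckets that contain at least one 403, assuming the field is named responseCode as in the question above (eventstats tags every event with its bucket's 403 count, so whole buckets can be dropped before the final stats):

index=sample_index path=*/sample_path*
| bucket _time span=1m
| eval is_403=if(responseCode==403, 1, 0)
| eventstats sum(is_403) AS count_403 BY _time
| where count_403 > 0
| stats count BY _time responseCode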
Hi @djoobbani .. please check this SPL.. thanks.

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_time
| stats earliest(r2_posted_timestamp) AS Earliest_r2_posted_timestamp, latest(r2_posted_timestamp) AS Latest_r2_posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds Earliest_r2_posted_timestamp Latest_r2_posted_timestamp
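If the goal is one row per event_id/process_id pair showing only its earliest posting, a sketch of an alternative that keeps the other columns as well (assuming posted_timestamp is the field to order by; swap in posted_time if that is the intended field):

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| sort 0 e1_t
| dedup event_id process_id
| table event_id process_id msg_timestamp posted_timestamp lag_in_seconds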
Nice SPL @ITWhisperer .. Hi @Kirthika .. please check this SPL.. (the stats logic may need to be fine-tuned)

source="testlogrex.txt" host="laptop" sourcetype="nov12"
| rex field=_raw "\|(?<msg>.+)$"
| stats sum(eval(case(msg=="**Starting**",1,msg=="Shutting down",-1))) as bad count(eval(case(msg=="**Starting**",1))) as starts
| eval good=starts-bad

This SPL gives this result:

bad  starts  good
5    7       2

The sample logs and the rex used here:

source="testlogrex.txt" host="laptop" sourcetype="nov12"
| rex field=_raw "\|(?<msg>.+)$"
| table _raw msg

_raw                                      msg
2022-08-19 08:10:04.6218|Shutting down    Shutting down
2022-08-19 08:10:03.6061|dd03             dd03
2022-08-19 08:10:02.5905|fff              fff
2022-08-19 08:10:01.0593|**Starting**     **Starting**
2022-08-19 08:10:08.6843|**Starting**     **Starting**
2022-08-19 08:10:07.6686|ddd07            ddd07
2022-08-19 08:10:06.6374|fffff06          fffff06
2022-08-19 08:10:05.6218|**Starting**     **Starting**
2022-08-19 08:10:12.5905|fff12            fff12
2022-08-19 08:10:11.0593|**Starting**     **Starting**
2022-08-19 08:10:10.1530|vv10             vv10
2022-08-19 08:10:09.1530|aa09             aa09
2022-08-19 08:10:16.6374|fffff16          fffff16
2022-08-19 08:10:15.6218|**Starting**     **Starting**
2022-08-19 08:10:14.6218|Shutting down    Shutting down
2022-08-19 08:10:13.6061|**Starting**     **Starting**
2022-08-19 08:10:19.15|aa19               aa19
2022-08-19 08:10:18.6843|**Starting**     **Starting**
2022-08-19 08:10:17.6686|ddd17            ddd17
2022-08-19 08:10:20.160|vv20              vv20
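Since bad is effectively starts minus shutdowns, an equivalent formulation may be easier to read; a sketch against the same sample data and rex, reproducing bad=5, starts=7, good=2:

source="testlogrex.txt" host="laptop" sourcetype="nov12"
| rex field=_raw "\|(?<msg>.+)$"
| stats count(eval(msg=="**Starting**")) AS starts count(eval(msg=="Shutting down")) AS shutdowns
| eval bad=starts-shutdowns, good=shutdowns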
Hi @joe06031990, what I usually follow is: I enable maintenance mode on the cluster manager, and once the activity is done on the indexers and they are back up and running, I disable maintenance mode; then all bucket fixup activities will complete. Some use maintenance mode and splunk offline together.
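If you want to confirm the maintenance-mode state from a search, a hedged sketch against the cluster manager's REST endpoint (the splunk_server value is a placeholder for your cluster manager's name as known to the search head):

| rest /services/cluster/master/info splunk_server=<cluster_manager>
| table splunk_server maintenance_mode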
@SanjayReddy Thanks for your response. I just mentioned the log format; the log file itself is recent, and a new file is generated every day as filename.<date>. I updated my post as well.
Hi @iamsplunker, comparing the inputs.conf with the log file's last-modified time, there is an issue I see: the log file was last modified last month, and in inputs.conf you have ignoreOlderThan = 7d. Splunk will ignore log files that were modified more than 7 days ago. I would suggest commenting out ignoreOlderThan = 7d for now and restarting splunkd; once Splunk has read the older file, you can put the setting back.
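To see whether the forwarder's file-monitoring components ever picked the file up or skipped it, a sketch of an _internal search (the host value is a placeholder for the UF host; the components listed are the usual tailing-related ones):

index=_internal sourcetype=splunkd host=<uf_host> (component=TailingProcessor OR component=TailReader OR component=WatchedFile) "event_circuit"
| table _time host component log_level _raw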
Hi, we have a two-site indexer cluster with 6 indexers (3 per site), and we are upgrading the CPUs; each site will be offline for 3 hours. Do I need to do anything on Splunk, or do I need to run ./splunk offline on 3 indexers at a time before they are shut down? Thanks
Hi there: I have the following query:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_time
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds

The above query can return multiple events with the same event_id & process_id but different posted_timestamp values. I need to return only the one with the earliest/oldest posted_time (one of the fields in the event). How can I change the above query to accomplish this? Thanks!
Hello Splunkers, I have an issue with UF file monitoring where the input is not being monitored / not forwarding events to Splunk. I do not have access to the server to run btool.

[monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*]
sourcetype = banana
_meta = Appid::APP-1234 DataClassification::Unclassified
index = test
disabled = 0
crcSalt = <SOURCE>
ignoreOlderThan = 7d

The host(s) are sending _internal logs to Splunk. Here is what I see in splunkd.log (no errors). I also tried a wildcard (*) in the monitor stanza right after the /output directory, but it didn't work:

TailingProcessor [MainTailingThread] - Parsing configuration stanza: monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*

Actual log file:

-rw-r--r-- 1 automat autouser 6184 Oct 8 00:00 event_circuit.ABD.11082023
Load balancers help the SHC perform better by evenly distributing users among all nodes.
Thanks for your support. After I reinstalled the UF, it started working; my issue has been fixed.
Hi Cansel, a Spring Boot application in a Kubernetes pod is brought up with the AppDynamics javaagent to track metrics. The container's base image has Azul Zulu Java 1.8, and we need a compatible version of the AppDynamics javaagent.jar to get it working. We have a license and a SaaS controller set up to track the events. Thanks, Shwetha
Hi @spy_jr, Splunk may fail to process your _raw JSON as-is. There is an extra comma after the name string value of each entityValue object, e.g. after "name": "SERVER01":

{
  "entityValue": {
    "name": "SERVER01",
  },
  "relatedIndicators": [
    2
  ]
},

After correcting that, you can extract the values you need using JSON and multivalue eval functions:

| spath
| eval impactScope_mv=json_array_to_mv(json_extract(_raw, "detail.impactScope{}"), false())
| eval relatedIndicators_mvcount=mvmap(impactScope_mv, mvcount(json_array_to_mv(json(json_extract(impactScope_mv, "relatedIndicators")), false())))
| eval relatedIndicators_mvindex=mvfind(relatedIndicators_mvcount, max(relatedIndicators_mvcount))
| eval impactScope_name=json_extract(mvindex(impactScope_mv, relatedIndicators_mvindex), "entityValue.name")
| table workbenchId workbenchName severity impactScope_name

Note that ties (two or more arrays of equal length) will return the first entry. In a nutshell, we:

1. Convert the detail.impactScope{} array into a multivalued field.
2. For each entry of the impactScope array, convert the relatedIndicators array to a multivalued field and store the count of entries in a new multivalued field.
3. Find the index of the largest value (the longest array).
4. Extract the entityValue.name value from the impactScope_mv multivalued field using the index identified in step 3.
5. Display a table using the fields extracted by spath and the field created in step 4.
Hi @gcusello, You can modify the serverName field in $SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/assets.csv; however, the value will be set back to the real instance name whenever Monitoring Console General Setup changes are saved or the DMC Asset - * searches are executed.
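As a quick way to see the real serverName values that General Setup will write back, a sketch using the server info endpoint (splunk_server=* assumes the instances are connected as search peers of wherever you run the search):

| rest /services/server/info splunk_server=*
| table splunk_server serverName host_fqdn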
Hi @AL3Z, the cause depends on the output of save_image_and_icon_on_install.py. Try this search:

index=_internal sourcetype=splunkd save_image_and_icon_on_install.py

Possible outcomes in Splunk Enterprise 9.1.1 / Dashboard Studio 1.11.6:

1. "Unable to fetch kvstore status response: {}, content: {}": check kvstore status, splunkd.log, and mongod.log.
2. "kvstore current status is {}. Exiting now.": check kvstore status, splunkd.log, and mongod.log.
3. "Icons of {} version {} are already stored in kvstore collection. Skipping now and exiting.": you can ignore this message; I'm not sure why Splunk used exit(1) in this case. You can configure the ES Audit - Script Errors health check to ignore the script failure, but you risk ignoring valid save_image_and_icon_on_install.py issues as well. See <https://docs.splunk.com/Documentation/ES/latest/Admin/Troubleshootscripterrors#Customize_messages_about_specific_scripts>; this specific save_image_and_icon_on_install.py use case is even covered in the documentation.
4. "kvstore status is {}, exiting now.": this exits with code 0 and won't generate a system message; however, check kvstore status, splunkd.log, and mongod.log.
5. "Failed to connect to splunkd.": you're unlikely to see this, but if you do, check splunkd.log and python.log.
6. "Failed to save icons to kvstore due to an error.": check kvstore status, splunkd.log, and mongod.log.
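If you need to check KV store health directly from a search, a sketch using the kvstore status endpoint (run it on the affected instance; the field names are from memory and may vary slightly by version):

| rest /services/kvstore/status splunk_server=local
| table splunk_server current.status current.replicationStatus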
Good day everyone. Someone here will have had experience obtaining values from JSON. Currently I have _raw events in JSON format from which I am trying to build a table that shows, in a single row, the values of the object whose array has the most entries. Let me explain with the following example. This is the JSON that comes in each event:

{
  "investigationStatus": "New",
  "status": 1,
  "priorityScore": 38,
  "workbenchName": "PSEXEC Execution By Process",
  "workbenchId": "WB-18286-20231106-00005",
  "severity": "low",
  "caseId": null,
  "detail": {
    "schemaVersion": "1.14",
    "alertProvider": "SAE",
    "description": "PSEXEC execution to start remote process",
    "impactScope": [
      {
        "entityValue": {
          "name": "SERVER01",
        },
        "relatedIndicators": [ 2 ]
      },
      {
        "entityValue": {
          "name": "SERVER02",
        },
        "relatedIndicators": [ 2, 3 ]
      },
      {
        "entityValue": {
          "name": "SERVER03",
        },
        "relatedIndicators": [ 1, 2, 3, 4 ]
      },
      {
        "entityValue": {
          "name": "SERVER04",
        },
        "relatedIndicators": [ 1 ]
      }
    ]
  }
}

And this is the table I'm trying to get:

workbenchId                 workbenchName                   severity   name_host
"WB-18286-20231106-00005"   "PSEXEC Execution By Process"   "low"      "SERVER03"

As you can see, the values from the first level of the JSON are there, and then the host name SERVER03, since it has the largest number of values in its "relatedIndicators" array (1 through 4); the other servers are not included because their arrays are smaller. Any idea how I could achieve this? I tried json_extract but didn't succeed.
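A sketch of an alternative using mvexpand, in case iterating one impactScope entry per row is easier to follow (this assumes the trailing commas noted in the reply above have been removed so the JSON parses, and a Splunk version recent enough to have the json_extract/json_array_to_mv eval functions):

| spath
| spath path=detail.impactScope{} output=impactScope
| mvexpand impactScope
| eval name_host=json_extract(impactScope, "entityValue.name")
| eval indicator_count=mvcount(json_array_to_mv(json(json_extract(impactScope, "relatedIndicators")), false()))
| sort 0 - indicator_count
| dedup workbenchId
| table workbenchId workbenchName severity name_host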
Hi, remove keeporphans=true from your transaction command and add keepevicted=true. Without a field to identify unique instances of jobs, you can't know which of multiple START events correlates with a single COMPLETE event:

T0 aJobName2 START ``` A ```
T1 aJobName2 START ``` B ```
T2 aJobName2 COMPLETE

Which instance of aJobName2 completed, A or B? The transaction command with keepevicted=true startswith=START endswith=COMPLETE will create two transactions:

T0 aJobName2 START
closed_txn=0 duration=0

T1 aJobName2 START
T2 aJobName2 COMPLETE
closed_txn=1 duration=T2-T1

Is there a job number, process identifier, etc. in your _raw event?
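A sketch of the pipeline described above, with a placeholder index and sourcetype (everything after the base search uses standard transaction output, including the closed_txn and duration fields):

index=<your_index> sourcetype=<your_sourcetype>
| transaction startswith="START" endswith="COMPLETE" keepevicted=true
| eval status=if(closed_txn==1, "completed", "evicted or still open")
| table _time duration status closed_txn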
Thank you very much for the solution!
# Move the files as a temporary backup. You can delete the temporary backup files after verifying the cert is renewed.
# To renew the default Splunk Web cert:

mv $SPLUNK_HOME/etc/auth/splunkweb/cert.pem $SPLUNK_HOME/etc/auth/splunkweb/cert.pem.bk
mv $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem.bk

# If the cert was generated at the time of Splunk installation, the splunkd cert might also be due to expire. You must renew this certificate if your Splunk Enterprise is running KV store.
# To renew the splunkd (port 8089) cert:

mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem_bk

Then restart Splunk.