All Posts


@SanjayReddy Thanks for your response. I only described the log format; the log file itself is recent. A new file is generated every day as filename.<date>. I updated my post as well.
Hi @iamsplunker, comparing your inputs.conf with the log file's last-modified time, I see an issue: the log file was last modified last month, but inputs.conf sets ignoreOlderThan = 7d, so Splunk will ignore log files modified more than 7 days ago. I suggest commenting out ignoreOlderThan = 7d at first and restarting splunkd; once Splunk has read the older files, you can re-enable the setting.
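For example, a hypothetical monitor stanza with the setting temporarily commented out (the path, index, and sourcetype here are placeholders; adjust them to your environment):

```
# inputs.conf -- hypothetical example; path, index, and sourcetype are placeholders
[monitor:///var/log/myapp/filename.*]
index = test
sourcetype = my_sourcetype
# ignoreOlderThan = 7d   <- commented out so files older than 7 days are picked up;
#                          re-enable after the backlog has been ingested
```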
Hi, we have a two-site cluster of 6 indexers (3 per site), and we are upgrading the CPUs; each site will be offline for about 3 hours. Do I need to do anything in Splunk beforehand, or should I run ./splunk offline on the 3 indexers at each site before they are shut down? Thanks
Hi there: I have the following query:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds

This query can return multiple events with the same event_id and process_id but different posted_timestamp values. I need to return only the event with the earliest/oldest posted_timestamp (one of the fields in the event). How can I change the query to accomplish this? Thanks!
Hello Splunkers, I have an issue with UF file monitoring: the input is not being monitored and events are not being forwarded to Splunk. I do not have access to the server to run btool.

[monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*]
sourcetype = banana
_meta = Appid::APP-1234 DataClassification::Unclassified
index = test
disabled = 0
crcSalt = <SOURCE>
ignoreOlderThan = 7d

The host(s) are sending _internal logs to Splunk. Here is what I see in splunkd.log (no errors). I also tried a wildcard (*) immediately after the /output directory in the monitor stanza, but it didn't work:

TailingProcessor [MainTailingThread] - Parsing configuration stanza: monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*

Actual log file:

-rw-r--r-- 1 automat autouser 6184 Oct 8 00:00 event_circuit.ABD.11082023
Load balancers help the SHC perform better by evenly distributing users among all nodes.
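As a hedged illustration (not from this thread), a minimal HAProxy configuration distributing users across search head cluster members might look like the sketch below; the hostnames and ports are placeholders, and the cookie lines add the sticky sessions that Splunk Web generally needs so a user stays on one member:

```
# haproxy.cfg -- hypothetical sketch; hostnames and ports are placeholders
frontend splunk_web
    bind *:8000
    default_backend shc_members

backend shc_members
    balance roundrobin
    # sticky sessions: pin each user to one search head member
    cookie SERVERID insert indirect nocache
    server sh1 sh1.example.com:8000 check cookie sh1
    server sh2 sh2.example.com:8000 check cookie sh2
    server sh3 sh3.example.com:8000 check cookie sh3
```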
Thanks for your support. After I reinstalled the UF, it started working; my issue has been fixed.
Hi Cansel, A Spring Boot application in a Kubernetes pod is started with the AppDynamics javaagent to track metrics. The container's base image has Azul Zulu Java 1.8, and we need a compatible version of the AppDynamics javaagent.jar to get it working. We have a license and a SaaS controller set up to track the events. Thanks, Shwetha
Hi @spy_jr, Splunk may fail to process your _raw JSON as-is. There is an extra comma after the name string value of each entityValue object, e.g. after "name": "SERVER01":

{
  "entityValue": {
    "name": "SERVER01",
  },
  "relatedIndicators": [
    2
  ]
},

After correcting that, you can extract the values you need using JSON and multivalue eval functions:

| spath
| eval impactScope_mv=json_array_to_mv(json_extract(_raw, "detail.impactScope{}"), false())
| eval relatedIndicators_mvcount=mvmap(impactScope_mv, mvcount(json_array_to_mv(json(json_extract(impactScope_mv, "relatedIndicators")), false())))
| eval relatedIndicators_mvindex=mvfind(relatedIndicators_mvcount, max(relatedIndicators_mvcount))
| eval impactScope_name=json_extract(mvindex(impactScope_mv, relatedIndicators_mvindex), "entityValue.name")
| table workbenchId workbenchName severity impactScope_name

Note that ties (two or more arrays of equal length) will return the first entry. In a nutshell, we:

1. Convert the detail.impactScope{} array into a multivalued field.
2. For each entry of the impactScope array, convert the relatedIndicators array to a multivalued field and store the count of entries in a new multivalued field.
3. Find the index of the largest value (the longest array).
4. Extract the entityValue.name value from the impactScope_mv multivalued field using the index identified in step 3.
5. Display a table using the fields extracted by spath and the field created in step 4.
Hi @gcusello, You can modify the serverName field in $SPLUNK_HOME/etc/apps/splunk_monitoring_console/lookups/assets.csv; however, the value will be set back to the real instance name whenever Monitoring Console General Setup changes are saved or the DMC Asset - * searches are executed.
Hi @AL3Z, The cause depends on the output of save_image_and_icon_on_install.py. Try this search:

index=_internal sourcetype=splunkd save_image_and_icon_on_install.py

Possible outcomes in Splunk Enterprise 9.1.1 / Dashboard Studio 1.11.6:

1. "Unable to fetch kvstore status response: {}, content: {}": Check kvstore status, splunkd.log, and mongod.log.
2. "kvstore current status is {}. Exiting now.": Check kvstore status, splunkd.log, and mongod.log.
3. "Icons of {} version {} are already stored in kvstore collection. Skipping now and exiting.": You can ignore this message; I'm not sure why Splunk used exit(1) in this case. You can configure the ES Audit - Script Errors health check to ignore the script failure, but you risk ignoring valid save_image_and_icon_on_install.py issues as well. See <https://docs.splunk.com/Documentation/ES/latest/Admin/Troubleshootscripterrors#Customize_messages_about_specific_scripts>. This specific save_image_and_icon_on_install.py use case is even covered in the documentation.
4. "kvstore status is {}, exiting now.": This exits with code 0 and won't generate a system message; however, check kvstore status, splunkd.log, and mongod.log.
5. "Failed to connect to splunkd.": You're unlikely to see this, but if you do, check splunkd.log and python.log.
6. "Failed to save icons to kvstore due to an error.": Check kvstore status, splunkd.log, and mongod.log.
Good day everyone. Someone here will have had experience obtaining values from JSON. Currently I have _raw events in JSON format, and I am trying to obtain a table that shows, in a single row, the values of the object whose array has the most data. I can best explain with the following example. This is the JSON that comes in each event:

{
  "investigationStatus": "New",
  "status": 1,
  "priorityScore": 38,
  "workbenchName": "PSEXEC Execution By Process",
  "workbenchId": "WB-18286-20231106-00005",
  "severity": "low",
  "caseId": null,
  "detail": {
    "schemaVersion": "1.14",
    "alertProvider": "SAE",
    "description": "PSEXEC execution to start remote process",
    "impactScope": [
      {
        "entityValue": {
          "name": "SERVER01",
        },
        "relatedIndicators": [ 2 ]
      },
      {
        "entityValue": {
          "name": "SERVER02",
        },
        "relatedIndicators": [ 2, 3 ]
      },
      {
        "entityValue": {
          "name": "SERVER03",
        },
        "relatedIndicators": [ 1, 2, 3, 4 ]
      },
      {
        "entityValue": {
          "name": "SERVER04",
        },
        "relatedIndicators": [ 1 ]
      }
    ]
  }
}

And this is the table I'm trying to get:

workbenchId: "WB-18286-20231106-00005" | workbenchName: "PSEXEC Execution By Process" | severity: "low" | name_host: "SERVER03"

As you can see, the values from the first level of the JSON are included, and then the host name SERVER03, since it has the largest number of values in the "relatedIndicators" array (1 through 4); the other servers have fewer entries in their arrays. Any idea how I could achieve this? I tried with json_extract but didn't succeed.
Hi, Remove keeporphans=true from your transaction command and add keepevicted=true. Without a field to identify unique instances of jobs, you can't know which of multiple START events correlates to a single COMPLETE event:

T0 aJobName2 START ``` A ```
T1 aJobName2 START ``` B ```
T2 aJobName2 COMPLETE

Which instance of aJobName2 completed, A or B? The transaction command with keepevicted=true startswith=START endswith=COMPLETE will create two transactions:

T0 aJobName2 START closed_txn=0 duration=0

T1 aJobName2 START
T2 aJobName2 COMPLETE closed_txn=1 duration=T2-T1

Is there a job number, process identifier, etc. in your _raw event?
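A sketch of the full search under those assumptions (the source name is a placeholder, not from your environment):

```
source=job_log
| transaction startswith=START endswith=COMPLETE keepevicted=true
| table _time duration closed_txn _raw
```

Evicted (unclosed) transactions will show closed_txn=0, so you can still see START events that never matched a COMPLETE.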
Thank you very much for the solution!
# Move the files to a temporary backup. You can delete the backup files after verifying the certificate has been renewed.
# To renew the default Splunk Web certificate:

mv $SPLUNK_HOME/etc/auth/splunkweb/cert.pem $SPLUNK_HOME/etc/auth/splunkweb/cert.pem.bk
mv $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem.bk

# If the certificate was generated at the time of Splunk installation, the splunkd certificate may also be due to expire.
# You must renew this certificate if your Splunk Enterprise instance runs KVStore.
# To renew the splunkd (port 8089) certificate:

mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.bk

# Then restart Splunk.
You could just create one event instead of three, or in the example, just return the first event:

| head 1

If you're working with ISO time strings but unknown times in an unknown order, you can sort lexicographically:

| sort time_1
| head 1

If the time format is known but not necessarily ISO, you can convert time_1 to an epoch value using the appropriate format string (still ISO in this example) and sort the result:

| eval time_1_epoch=strptime(time_1, "%Y-%m-%dT%H:%M:%S%Z")
| sort time_1_epoch
| head 1

If multiple events have the same time_1 value, you can use eventstats and where:

| eval time_1_epoch=strptime(time_1, "%Y-%m-%dT%H:%M:%S%Z")
| eventstats min(time_1_epoch) as min_time_1
| where time_1_epoch==min_time_1
Hi @Praz_123, Check your authentication.conf file to confirm you are using LDAP, then check splunkd.log to see what login-related errors appear, so you can debug further.
Hi Richgalloway, Thanks for your reply. I'm considering HAProxy for that because I have some experience with it, but I want to add the load balancer several months later. Could that have a negative effect on cluster performance, or cause faults in the search process?
Hi Justyna, What is your Ubuntu server's CPU architecture? Did you check whether it is supported? And did you verify that your Server Visibility agent is suitable for your OS/CPU architecture? Thanks Cansel
Hi Yann, You can use ADQL to create your own metric, as shown in the screenshot below. In your example, you can simply count your "status" field where the message is "Failed", and based on this count, create an alert or similar. Thanks Cansel