All Posts

If that is the exact regex and you are talking about using the rex command, then | rex "(?<new_field>(?<=\:\[)(.*)(?=\]))" will extract the data between the [] into new_field.
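A quick way to test this, if you want, is with makeresults; the sample event below is made up for illustration only:

| makeresults
| eval _raw="level:[some text between brackets]" ``` made-up sample event ```
| rex "(?<new_field>(?<=\:\[)(.*)(?=\]))"
| table new_field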
When writing regex, where in the regex string am I supposed to add the (?<new_field>) string? I have included a sample regex string below; where in this string would I add (?<new_field>)? (?<=\:\[)(.*)(?=\]) Thanks!
Hi, in many cases, if you haven't done data onboarding correctly and set TIME_FORMAT correctly, Splunk can decide that 05/03/2024 is actually the 3rd of May 2024, not the 5th of March 2024. To check this, you need to look at whether those events are in the future. That requires adding a correct end date, or a long enough span into the future, e.g. latest=+10mon, to your SPL query. You can also check whether there are issues with the date parsing in the MC and/or in the internal logs. r. Ismo
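As a rough sketch of that check (your_index is a placeholder), extend latest into the future and look for events timestamped ahead of now:

index=your_index latest=+10mon ``` your_index is a placeholder ```
| where _time > now()
| stats count, min(_time) as first_future, max(_time) as last_future by host, sourcetype
| convert ctime(first_future) ctime(last_future)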
Hour (7-21,0-9,12-21): The range 7-21 covers hours from 7 AM to 9 PM. The additional ranges 0-9 and 12-21 extend this so that hours from midnight to 9 AM and from noon to 9 PM are also included; together, the three ranges cover hours 0 through 21. Therefore, the cron job runs every minute from midnight to 9 PM, excluding only 10 PM and 11 PM.
@Santosh2 Can you try this: * 7-21,0-9,12-21 * * *
Thanks, you are right! I needed to use back quotes.
Hi All! Hope all is well. I am about to pull my hair out trying to override a sourcetype for a specific set of TCP network events. The events start with the same string, 'acl_policy_name', and are currently being labeled with a sourcetype of 'f5:bigip:syslog'. I want to override that sourcetype with a new one labeled 'f5:bigip:afm:syslog'; however, even after modifying the props and transforms conf files, still no dice. I used regex101 to ensure that the regex for the 'acl_policy_name' match is correct, but I've gone through plenty of articles and Splunk documentation to no avail. Nothing in the btool output looks out of place or as though it could be interfering with the settings below. Any thoughts or suggestions would be greatly appreciated before I throw my laptop off a cliff. Thanks in advance!

Event Snippet:

Inputs.conf
[tcp://9515]
disabled = false
connection_host = ip
sourcetype = f5:bigip:syslog
index = f5_cs_p_p

Props.conf
[f5:bigip:syslog]
TRANSFORMS-afm_sourcetype = afm-sourcetype
*Note: I also tried [source::tcp:9515] as a spec instead of the sourcetype, but no dice either way.

Transforms.conf
[afm-sourcetype]
REGEX = ^acl_policy_name="$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:afm:syslog
WRITE_META = true
https://crontab.guru/#0_0-21,23_*_*_*  0 0-21,23 * * *
Hi all, I have been looking at my Splunk CMC for a customer and have noticed that the ingest per day has been up and down since early November. I have had a look at the CMC (Cloud Monitoring Console), but for some tabs the graphs shown by default won't let me go back to November to find trends such as "daily event count per day in November".

Could someone guide me on why this is and what would be a good place to start on this investigation? For context, the architecture is:

UF --> HF --> SC
SC4S --> SC
Cloud data --> HF --> SC
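As a rough starting point outside the CMC panels (a sketch only; whether license_usage.log is searchable, and under which index, can vary in Splunk Cloud), something like this charts daily ingest volume back over the period in question:

index=_internal source=*license_usage.log type=Usage earliest=-120d@d latest=@d
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 3)
| fields _time GB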
You have now used double quotes - try back quotes (`). To expand the macro, put your cursor in the search window and press the <ctrl><shift>E keys together.
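For example, with the securemsg macro from this thread, the call goes in back quotes rather than double quotes (a sketch only; your_index is a placeholder, and whether the leading pipe is needed depends on how the macro is defined):

index=your_index ``` your_index is a placeholder ```
| eval log_info=_raw
| `securemsg(log_info)`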
@richgalloway or @General_Talos How do we hide an app that is downloaded from Splunkbase so that it can't be viewed by other customers through source control like Git? I see a file named default.meta where it says:

[]
owner = admin
access = read : [ * ], write : [ admin ]
export = system

Thanks
Hello, I'm currently doing some training as part of a SOC analyst intern position. One of the questions in the little exercise our trainer created for us is this (some information has been omitted purposely out of respect for the organization): How many of each user category authentication attempt exist for all successful authentications? Would someone be able to assist me with a general start for how I would write up my search to look for this kind of info?
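A very general starting point might look like the sketch below; the index, the action value, and the user_category field are placeholders and will depend on the data set used in the exercise:

index=your_auth_index action=success ``` index, action value, and field names are placeholders ```
| stats count by user_category
| sort - count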
I am running the following query for a single 24 hour period. I was expecting a single summary row result. Not sure why the result is split across 2 rows. Here's the query:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| where like(h, "%metfarm%") OR like(h, "%scale%")
| eval h=rtrim(h,".eng.ssnsgs.net")
| eval env=split(h,"-")
| eval env=mvindex(env,1)
| eval env=if(like(env,"metfarm%"),"metfarm",env)
| eval env=if(like(env,"sysperf%"),"95x",env)
| eval env=if(like(env,"gs02"),"tscale",env)
| timechart span=1d sum(b) as b by env
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals
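One thing worth checking (a sketch, not a definitive diagnosis): timechart span=1d buckets by calendar day, so a 24-hour window that isn't aligned to midnight spans two day buckets and yields two rows. Snapping the search window to day boundaries should collapse it to one row; the env-splitting evals from the query above are omitted here for brevity:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool earliest=-1d@d latest=@d
| timechart span=1d sum(b) as b
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals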
Thank you.

index=<value> source=<sourcePath.log> host=<value> | <evalQueryGiven>
vs
index=<sameValue> source=<splunkForwarderPath.log> host=<sameValue> | <evalQueryGiven>

[SourceLogs vs Summary logs from SplunkForwarder]
[Last 15mins] 250K events vs 82K events.
[Time difference] -0.023 vs -0.77 at lowest; -0.894 vs 1.14 at highest.

The missing log from the source had a time definition (example: 06/Mar/2024:10:08:17.894). I can't say whether this is a queue problem?
Curious, did you ever find a fix for this?
If I want to exclude only 10 PM, then what will the cron job be?
Can't post links, so just search for the freeload101 GitHub for updated code.

#!/bin/bash
########################## FUNC
function UFYUM(){
cd /tmp
rpm -Uvh --nodeps `curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*x86_64.rpm"' |sed 's/\"//g' | head -n 1`
yum -y install splunkforwarder.x86_64
sleep 5
}
function UFDEB(){
cd /tmp
wget `curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*amd64.deb"' |sed 's/\"//g' | head -n 1` -O amd64.deb
dpkg -i amd64.deb
sleep 5
}
function UFConf(){
mkdir -p /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/
cd /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/
cat <<EOF> /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/app.conf
[install]
state = enabled
[package]
check_for_updates = false
[ui]
is_visible = false
is_manageable = false
EOF
cat <<EOF> /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/deploymentclient.conf
[deployment-client]
phoneHomeIntervalInSecs = 60
[target-broker:deploymentServer]
targetUri = XXXXXXXXXXXXXXXXXXXXXXX:8089
EOF
cat <<EOF> /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = XXXXXXXXXXXXXXXXXXXXXXXX
EOF
/opt/splunkforwarder/bin/splunk cmd btool deploymentclient list --debug
/opt/splunkforwarder/bin/splunk start --accept-license
}
######################################################### MAIN
# Check for RPM package managers
if command -v yum > /dev/null; then
UFYUM
UFConf
else
echo "No YUM package manager found."
fi
# Check for DEB package managers
if command -v dpkg > /dev/null; then
UFDEB
UFConf
else
echo "No DEB package manager found."
fi
I changed it to .... | eval log_info=_raw | "securemsg(log_info)" | ..., but got the same error. How do I use <ctrl><shift>E to expand the macro? Thanks
I doubt if Splunk has truly extracted the JSON array content.payload{}. As you observed, Splunk gives you a flattened structure of the array. As @gcusello said, spath is the right tool. The syntax is

| spath content.payload{}
| mvexpand content.payload{}

Normally, you can then continue to use spath to extract content.payload{} after this. But your data has another layer of array. That's not usually a problem. But then, your developers did you a great injustice by using actual data values (e.g., "GL Import flow processing results") as a JSON key. Not only is this data, but the key name also includes major SPL breakers. I haven't found a method to use spath to handle this. If you have any influence over your developers, insist that they change "GL Import flow processing results" to a value and assign it an appropriate key such as "workflow". Otherwise, your trouble will be endless.

Luckily, Splunk introduced fromjson in 9.0. If you use 9+, you can work around this temporarily before your developers take action.

| spath path=content.payload{}
| mvexpand content.payload{}
| fromjson content.payload{}
| mvexpand "GL Import flow processing results"

Your sample data should give you two rows. (Scroll right to see other columns.)

GL Import flow processing results: {"concurBatchId":"4","batchId":"6","count":"50","impConReqId":"1","errorMessage":null,"filename":"CONCUR_GL.csv"}
content.payload{}: { "GL Import flow processing results" : [ { "concurBatchId" : "4", "batchId" : "6", "count" : "50", "impConReqId" : "1", "errorMessage" : null, "filename" : "CONCUR_GL.csv" } ] }

content.payload{}: AP Import flow related results : Extract has no AP records to Import into Oracle

This is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "{ \"content\" : { \"jobName\" : \"AP2\", \"region\" : \"NA\", \"payload\" : [ { \"GL Import flow processing results\" : [ { \"concurBatchId\" : \"4\", \"batchId\" : \"6\", \"count\" : \"50\", \"impConReqId\" : \"1\", \"errorMessage\" : null, \"filename\" : \"CONCUR_GL.csv\" } ] }, \"AP Import flow related results : Extract has no AP records to Import into Oracle\" ] } }"
``` data emulation above ```
Hi community, I have an AO AG with two nodes, and I have these four IP addresses:

10.10.10.62 (DB 1)
10.10.10.63 (DB 2)
10.10.10.61 (Cluster IP)
10.10.10.60 (AG Listener IP)

I want to discover the two nodes automatically. According to the documentation, Configure Microsoft SQL Server Collectors (appdynamics.com):

To enable monitoring of all the nodes, you must enable the dbagent.mssql.cluster.discovery.enabled property either at the Controller level or at the agent level.

I am running the following:

$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -jar db-agent.jar -Ddbagent.mssql.cluster.discovery.enabled=true &

But it doesn't work when I configure the collector with the AG Listener IP. I also get the below:

`Is Failover Cluster Discovery Enabled: False`!

I have added dbagent.mssql.cluster.discovery.enabled, though!? What could I possibly be doing wrong? Thank you