All Posts


Hi All! Hope all is well. I am about to pull my hair out trying to override a sourcetype for a specific set of TCP network events. The events start with the same string, 'acl_policy_name', and are currently being labeled with a sourcetype of 'f5:bigip:syslog'. I want to override that sourcetype with a new one, 'f5:bigip:afm:syslog'; however, even after modifying the props and transforms conf files: still no dice. I used regex101 to ensure that the regex for the 'acl_policy_name' match is correct, but I've gone through enough articles and Splunk documentation to no avail. Nothing in the btool output looks out of place or as though it could be interfering with the settings below. Any thoughts or suggestions would be greatly appreciated before I throw my laptop off a cliff. Thanks in advance!

Event Snippet:

inputs.conf:

[tcp://9515]
disabled = false
connection_host = ip
sourcetype = f5:bigip:syslog
index = f5_cs_p_p

props.conf:

[f5:bigip:syslog]
TRANSFORMS-afm_sourcetype = afm-sourcetype

*Note: I also tried [source::tcp:9515] as a spec instead of the sourcetype, but no dice either way.

transforms.conf:

[afm-sourcetype]
REGEX = ^acl_policy_name="$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:afm:syslog
WRITE_META = true
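One detail worth flagging in the transforms stanza above (offered as a guess, not a confirmed diagnosis): in `^acl_policy_name="$`, the `$` anchors the end of the event immediately after the opening quote, so the pattern only matches an event that consists of exactly that string and nothing else. If the intent is "events that start with acl_policy_name=", a sketch without the trailing anchor would be:

```
[afm-sourcetype]
REGEX = ^acl_policy_name="
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:afm:syslog
WRITE_META = true
```

regex101 would report a match for `^acl_policy_name="$` if the test string was only the prefix itself, which could explain why the regex looked correct there but never fired against full events.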
See https://crontab.guru/#0_0-21,23_*_*_* : the schedule 0 0-21,23 * * * fires at minute 0 of every hour except 22:00 (10pm).
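As a quick sanity check of that schedule's hour field (plain shell, no cron library; the expansion of "0-21,23" is hand-rolled here rather than parsed by cron itself):

```shell
# List the hours NOT covered by the cron hour field "0-21,23".
# The field expands to hours 0..21 plus 23, so only 22 (10pm) is skipped.
covered=$( { seq 0 21; echo 23; } | sort -n )
for h in $(seq 0 23); do
  printf '%s\n' "$covered" | grep -qx "$h" || echo "skipped hour: $h"
done
```

Running it prints `skipped hour: 22`, confirming the alert is suppressed only during the 10pm hour.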
Hi all, I have been looking at the Splunk CMC (Cloud Monitoring Console) for a customer and have noticed that the ingest per day has been up and down since early November. However, for some tabs, the graphs shown by default won't let me go back to November to find trends such as "daily event count per day in November".

Could someone guide me on why this is and what would be a good place to start on this investigation? For context, the architecture is:

UF --> HF --> SC
SC4S --> SC
Cloud data --> HF --> SC
You have now used double quotes; try backquotes (`) instead. Then put your cursor in the search window and press the <ctrl><shift>E keys together.
@richgalloway or @General_Talos  How do we hide an app downloaded from Splunkbase so it cannot be viewed by other customers through source control such as Git? I see a default.meta file that says:

[]
owner = admin
access = read : [ * ], write : [ admin ]
export = system

Thanks
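For reference, a sketch of tightening those permissions inside Splunk (role names are placeholders; adjust to your roles): knowledge-object access is controlled in the app's metadata/local.meta, so restricting read to a single role would look like:

```
[]
access = read : [ admin ], write : [ admin ]
export = none
```

Note that .meta settings govern role-based visibility inside Splunk only. Whether this addresses the Git concern is a separate question, since files committed to a repository are visible to anyone with repository access regardless of .meta settings.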
Hello, I'm currently doing some training as part of a SOC analyst intern position. One of the questions in the exercise our trainer created for us is this (some information has been omitted purposely out of respect for the organization): "How many of each user category authentication attempt exist for all successful authentications?" Would someone be able to assist me with a general start for how I would write up my search to look for this kind of info?
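A general starting pattern for "count X by category" questions like this one, sketched with placeholder names (the real index, sourcetype, success indicator, and category field all depend on your environment's data):

```
index=<auth_index> sourcetype=<auth_sourcetype> action=success
| stats count by user_category
```

The idea is to filter down to successful authentications first, then let `stats count by` do the per-category tally. If the category is not already an extracted field, you may need an eval/case expression or a lookup to derive it before the stats.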
I am running the following query for a single 24 hour period. I was expecting a single summary row result. Not sure why the result is split across 2 rows. Here's the query:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| where like(h, "%metfarm%") OR like(h, "%scale%")
| eval h=rtrim(h,".eng.ssnsgs.net")
| eval env=split(h,"-")
| eval env=mvindex(env,1)
| eval env=if(like(env,"metfarm%"),"metfarm",env)
| eval env=if(like(env,"sysperf%"),"95x",env)
| eval env=if(like(env,"gs02"),"tscale",env)
| timechart span=1d sum(b) as b by env
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals
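One possible explanation, offered as a guess since the exact time-picker setting isn't shown: if the "24 hour period" is a relative window such as "Last 24 hours", it crosses midnight, and `timechart span=1d` then allocates events to two calendar-day buckets, producing two rows. Midnight-aligning the window with time modifiers yields a single bucket, e.g.:

```
index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool earliest=-1d@d latest=@d
| ...
| timechart span=1d sum(b) as b by env
```

(`earliest=-1d@d latest=@d` covers exactly yesterday, midnight to midnight; the `| ...` stands for the unchanged middle of the pipeline.)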
Thank you.

index=<value> source=<sourcePath.log> host=<value> | <evalQueryGiven>
vs
index=<sameValue> source=<splunkForwarderPath.log> host=<sameValue> | <evalQueryGiven>

[SourceLogs vs Summary logs from SplunkForwarder] [Last 15 mins]: 250K events vs 82K events.
[Time difference]: -0.023 vs -0.77 at lowest; -0.894 vs 1.14 at highest.
A log missing from Splunk did have a timestamp at the source (example: 06/Mar/2024:10:08:17.894). Could this be a queue problem?
Curious, did you ever find a fix for this?
If I want to exclude only 10pm, then what would the cron schedule be?
Can't post links, so just search for freeload101 on GitHub for updated code.

#!/bin/bash
########################## FUNC
function UFYUM(){
  cd /tmp
  rpm -Uvh --nodeps `curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*x86_64.rpm"' | sed 's/\"//g' | head -n 1`
  yum -y install splunkforwarder.x86_64
  sleep 5
}

function UFDEB(){
  cd /tmp
  wget `curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*amd64.deb"' | sed 's/\"//g' | head -n 1` -O amd64.deb
  dpkg -i amd64.deb
  sleep 5
}

function UFConf(){
  mkdir -p /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/
  cd /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/
  cat <<EOF> /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/app.conf
[install]
state = enabled
[package]
check_for_updates = false
[ui]
is_visible = false
is_manageable = false
EOF
  cat <<EOF> /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/deploymentclient.conf
[deployment-client]
phoneHomeIntervalInSecs = 60
[target-broker:deploymentServer]
targetUri = XXXXXXXXXXXXXXXXXXXXXXX:8089
EOF
  cat <<EOF> /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = XXXXXXXXXXXXXXXXXXXXXXXX
EOF
  /opt/splunkforwarder/bin/splunk cmd btool deploymentclient list --debug
  /opt/splunkforwarder/bin/splunk start --accept-license
}

######################################################### MAIN
# Check for RPM package managers
if command -v yum > /dev/null; then
  UFYUM
  UFConf
else
  echo "No YUM package manager found."
fi

# Check for DEB package managers
if command -v dpkg > /dev/null; then
  UFDEB
  UFConf
else
  echo "No DEB package manager found."
fi
I changed it to ... | eval log_info=_raw | "securemsg(log_info)" | ..., but got the same error. How do I use <ctrl><shift>E to expand the macro? Thanks
I doubt that Splunk has truly extracted the JSON array content.payload{}. As you observed, Splunk gives you a flattened structure of the array. As @gcusello said, spath is the right tool. The syntax is:

| spath content.payload{}
| mvexpand content.payload{}

Normally, you can then continue to use spath to extract from content.payload{} after this. But your data has another layer of array. That's not usually a problem either. The real trouble is that your developers did you a great injustice by using actual data values (e.g., "GL Import flow processing results") as a JSON key. Not only is this data, but the key name also includes major SPL breakers. I haven't found a way to use spath to handle this. If you have any influence over your developers, insist that they change "GL Import flow processing results" to a value and assign it an appropriate key such as "workflow". Otherwise, your trouble will be endless.

Luckily, Splunk introduced fromjson in 9.0. If you are on 9+, you can work around this temporarily before your developers take action:

| spath path=content.payload{}
| mvexpand content.payload{}
| fromjson content.payload{}
| mvexpand "GL Import flow processing results"

Your sample data should give you two results:

content.payload{} = { "GL Import flow processing results" : [ { "concurBatchId" : "4", "batchId" : "6", "count" : "50", "impConReqId" : "1", "errorMessage" : null, "filename" : "CONCUR_GL.csv" } ] }
GL Import flow processing results = {"concurBatchId":"4","batchId":"6","count":"50","impConReqId":"1","errorMessage":null,"filename":"CONCUR_GL.csv"}

content.payload{} = AP Import flow related results : Extract has no AP records to Import into Oracle
This is an emulation for you to play with and compare with real data   | makeresults | eval _raw = "{ \"content\" : { \"jobName\" : \"AP2\", \"region\" : \"NA\", \"payload\" : [ { \"GL Import flow processing results\" : [ { \"concurBatchId\" : \"4\", \"batchId\" : \"6\", \"count\" : \"50\", \"impConReqId\" : \"1\", \"errorMessage\" : null, \"filename\" : \"CONCUR_GL.csv\" } ] }, \"AP Import flow related results : Extract has no AP records to Import into Oracle\" ] } }" ``` data emulation above ```    
Hi community, I have an AlwaysOn Availability Group (AO AG) with two nodes and these four IP addresses:

10.10.10.62 (DB 1)
10.10.10.63 (DB 2)
10.10.10.61 (Cluster IP)
10.10.10.60 (AG Listener IP)

I want to discover the two nodes automatically. According to the documentation, Configure Microsoft SQL Server Collectors (appdynamics.com): "To enable monitoring of all the nodes, you must enable the dbagent.mssql.cluster.discovery.enabled property either at the Controller level or at the agent level."

I am running the following:

$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -jar db-agent.jar -Ddbagent.mssql.cluster.discovery.enabled=true &

But it doesn't work when I configure the collector with the AG Listener IP. I also see the following in the logs: `Is Failover Cluster Discovery Enabled: False`, even though I have added dbagent.mssql.cluster.discovery.enabled. What could I possibly be doing wrong? Thank you
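One thing worth checking, stated as general JVM behavior rather than a confirmed diagnosis for this agent: in `java [options] -jar app.jar [args]`, anything placed after the jar is passed to the application's main method as a plain program argument and is NOT registered as a JVM system property. In the command above, the `-Ddbagent.mssql.cluster.discovery.enabled=true` flag comes after `-jar db-agent.jar`, so the agent would never see it via System.getProperty. A reordered sketch, with all `-D` flags before `-jar`:

```
nohup java -Dappdynamics.agent.maxMetrics=300000 \
  -Ddbagent.name=DBMon-Agent \
  -Ddbagent.mssql.cluster.discovery.enabled=true \
  -jar db-agent.jar &
```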
0 0-21 * * *
Hi @kiran_panchavat, yes, you are correct, but my requirement is that within the 24 hours I don't want to receive the alert at 10pm and 11pm only. How can I do that?
Thank you for illustrating the input in text format. But please make sure JSON is conformant when doing mockups.

Speaking of JSON, I always say: do not treat structured data as text. Regex is not a suitable tool for structured data in most cases. Splunk's robust, QA-tested tools will save you countless hours down the road. The traditional tool for this is spath. Since 9.0, Splunk has also added fromjson, which can simplify this work. I'll begin with the simpler one. You didn't say which field the JSON is in, so I'll assume it's _raw in the following.

| fromjson _raw
| mvexpand Field1
| fromjson Field1

This gives you:

Field1 = {"id":1234,"name":"John"}: id=1234, name=John
Field1 = {"id":5678,"name":"Mary","occupation":{"title":"lawyer","employer":"law firm"}}: id=5678, name=Mary, occupation={"title":"lawyer","employer":"law firm"}

The spath alternative is (again assuming the JSON is in _raw):

| spath path=Field1{}
| mvexpand Field1{}
| spath input=Field1{}

This gives:

Field1{} = { "id": 1234, "name": "John" }: id=1234, name=John
Field1{} = { "id": 5678, "name": "Mary", "occupation": { "title": "lawyer", "employer": "law firm" } }: id=5678, name=Mary, occupation.employer=law firm, occupation.title=lawyer

There can be many variants in between. But the essence is to extract the elements of the JSON array, then handle the array as a multivalue field as a whole. If, for example, there are too many elements and you worry about RAM, you can use mvfilter to get only the data about Mary, since you are not interested in the other entries:

| fromjson _raw
| eval of_interest = mvfilter(json_extract(Field1, "name") == "Mary")

(Note you need 8.0 to use json_extract.) You get:

Field1 (multivalue) = {"id":1234,"name":"John"} and {"id":5678,"name":"Mary","occupation":{"title":"lawyer","employer":"law firm"}}
of_interest = {"id":5678,"name":"Mary","occupation":{"title":"lawyer","employer":"law firm"}}

Hope this helps.
By the way, the conformant form of your mock data is   { "Field1" : [ { "id": 1234, "name": "John" }, { "id": 5678, "name": "Mary", "occupation": { "title": "lawyer", "employer": "law firm" } } ] }   You can play with the following emulation and compare with real data   | makeresults | eval _raw = "{ \"Field1\" : [ { \"id\": 1234, \"name\": \"John\" }, { \"id\": 5678, \"name\": \"Mary\", \"occupation\": { \"title\": \"lawyer\", \"employer\": \"law firm\" } } ] }" ``` data emulation above ```    
It looks like you are using single quotes around the macro rather than backquotes. Are you sure the macro expands correctly? Try using <ctrl><shift>E to expand the macro.
| spath Field1{} output=Field1
| mvexpand Field1
| spath input=Field1 occupation
| where isnotnull(occupation)
| spath input=Field1 name
| table name
Thank you for the feedback!  I will take your suggestions into consideration!