All Posts

Hi @splunkerhtml, may I ask: after creating the token, did you copy-paste it? There is a chance that a space or an extra character was included when you pasted it (many of my friends have faced this issue!). Please double-check the token you created and pasted, then update us, thanks. Or, if this is a production project, you may contact Splunk Cloud Support; they should be able to help you. Upvotes / karma points are appreciated by everybody, thanks.
You need to illustrate the actual data (column format or raw, in text, anonymized as needed). Then, explain which command in your search "deducts" (I assume this means removes) the events in question. I don't see any logic to eliminate "user.lifecycle.delete.completed". Also, how does this string relate to your data fields?
What's the query and data that this comes from?
Your `indextime` is a macro, and its expansion does not work with the >=$info_min_time$ comparison.

Other points:
The documentation says that map does not work after the appendpipe or append commands - see Known limitations: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/map
Your use of appendpipe in this example is odd in that it does nothing - I assume this comes from some more complete search.
This search is probably NOT the way you want to do what you are trying to do - given your maxsearch=20000, it may take forever to run if you really have that many searches to run sequentially.
Perhaps you can say what you're trying to achieve, as map does not seem to be the solution for your scenario.
Thank you dtburrows3. I thought the same thing, but didn't know how to find what was being loaded on the Cloud, and I don't know of a btool option in Splunk Cloud for a custom app. Our developer created this custom app with everything in the default folder, so local wasn't a path we were deploying. I finally realized that we probably created a local folder ourselves in the GUI, when someone went into Manage Apps > view objects and edited the XML. I have modified it there manually to resolve the problem, but I want to delete the local view completely, and it didn't get removed when I uploaded a new release of the custom app to Splunk Cloud. Does anyone know how to delete a file from an app in Splunk Cloud?
I am attempting to ingest an XML file but am getting stuck - can someone please help? The data will ingest if I remove "BREAK_ONLY_BEFORE =\<item\>", but with a new event per item.

This is the XML and the configuration I have tried:

<?xml version="1.0" standalone="yes"?>
<DocumentElement>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:08:21+11:00</lastscandate>
<manufacturer>VMware, Inc.</manufacturer>
<model>VMware7,1</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.11.200</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T12:20:21+11:00</lastscandate>
<manufacturer>Hewlett-Packard</manufacturer>
<model>HP Compaq Elite 8300 SFF</model>
<operatingsystem>Microsoft Windows 8.1 Enterprise</operatingsystem>
<ipaddress>168.132.136.160</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:54:28+11:00</lastscandate>
<manufacturer>HP</manufacturer>
<model>HP EliteBook 850 G5</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.219.32, 192.168.1.221</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>
<item>
<hierarchy>ASA</hierarchy>
<hostname>AComputer</hostname>
<lastscandate>2023-12-17T11:50:20+11:00</lastscandate>
<manufacturer>VMware, Inc.</manufacturer>
<model>VMware7,1</model>
<operatingsystem>Microsoft Windows 10 Enterprise</operatingsystem>
<ipaddress>168.132.11.251</ipaddress>
<vendor />
<lastloggedonuser>JohnSmith</lastloggedonuser>
<totalcost>0.00</totalcost>
</item>

Inputs.conf
[monitor://D:\SplunkImportData\SNOW\*.xml]
sourcetype = snow:all:devices
index = asgmonitoring
disabled = 0

Props.conf
[snow:all:devices]
KV_MODE = xml
BREAK_ONLY_BEFORE = \<item\>
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE
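In case it helps others reading this thread: with SHOULD_LINEMERGE = false, BREAK_ONLY_BEFORE is not applied (it is a line-merging setting), so event breaking is controlled by LINE_BREAKER instead. A minimal sketch only, assuming the goal is one event per <item> element and keeping the original sourcetype name:

[snow:all:devices]
KV_MODE = xml
SHOULD_LINEMERGE = false
# Break before each <item>; the captured newline is discarded and the next
# event starts at the <item> tag itself.
LINE_BREAKER = ([\r\n]+)(?=<item>)
DATETIME_CONFIG = NONE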
You need to look at how the input is processed and at the definition of inputs.conf/props.conf for that linux sourcetype. As you can see in that event example, the time in the log message is 15:04:57, but the time in the Splunk event is 15:03:57, i.e. a minute earlier - so that timestamp is not the one being used when the data is ingested by Splunk. I am not familiar with SC4S, but has someone written a parser for that data? If so, it may be that the issue is at the SC4S parsing end.
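For reference, timestamp extraction is normally driven by the sourcetype's props.conf on whichever instance parses the data. A minimal sketch only - the sourcetype name and format string below are assumptions and need to match the actual linux syslog data:

[linux_messages_syslog]
# Parse the leading classic syslog date, e.g. "Dec 17 15:04:57",
# rather than letting Splunk guess the timestamp.
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32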
Actually, yes! I do have to add the fields pipe to the base search. I tried to add it to the chain search as well, but that does not work. And yes, I do have the rename in the chain search while the spath is in the main search. Interesting read on the forum thread you shared, indeed. I'll be careful for now about passing parsed data between searches. P.S. For name, in the end I have to mvzip name and status, mvjoin, find the latest one, then use rex to extract the values out. It's complicated and time-costly, but it works for now, so I think I'm going to just let it be.
I assume that there are typos in your MaxPeakTPS in the eventstats command and in your use of peakTPS in the following stats, and also in the use of peakTime, which does not exist as a field. You can do this:

| timechart span=1s count AS TPS
``` Calculate min and max TPS ```
| eventstats max(TPS) as max_TPS min(TPS) as min_TPS
``` Now work out average TPS, actual min and max TPS and then the first occurrence of the min/max TPS ```
| stats avg(TPS) as avgTPS values(*_TPS) as *_TPS min(eval(if(TPS=max_TPS, _time, null()))) as maxTime min(eval(if(TPS=min_TPS, _time, null()))) as minTime
| fieldformat maxTime=strftime(maxTime,"%x %X")
| fieldformat minTime=strftime(minTime,"%x %X")

The min(eval(...)) statements just look for the first _time when TPS is either min or max, to get the earliest time these values occurred. Note the use of the field naming convention min_TPS/max_TPS, which allows the use of wildcards in the stats.
Every time I create a table visualization, I notice that the value 0 is always aligned on the left side while the rest of the values are aligned on the right side (322, 3483, 0, 0 are in the same column). Is there any reason behind this, and any way to fix it? Thanks!
You are still rounding UP with your floor(Score)+1 - is that what you intended?
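For anyone comparing the two approaches, here is a small illustration (not from the original thread - the field name Score is just carried over): floor(Score)+1 and ceiling(Score) agree for fractional values but differ when Score is already a whole number.

| makeresults
| eval Score=3.0
``` floor(3.0)+1 returns 4, whereas ceiling(3.0) returns 3 ```
| eval floor_plus_one=floor(Score)+1, ceiled=ceiling(Score)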
You can give these evals a go. I would check and make sure you are getting everything extracted as expected. I don't have access to any sourcetype="mscs:nsg:flow" data at the moment, so I am just using simulated data based off of your screenshots. If you are happy with the output, then you could add them as calculated fields in local/props.conf (I would make sure that they don't step on any existing knowledge objects in the app, though).

| eval time=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 0), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 0))), 'time')
| eval src_ip=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1))), 'src_ip')
| eval dst_ip=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 2), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 2))), 'dst_ip')
| eval src_port=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 3), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 3))), 'src_port')
| eval dst_port=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 4), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 4))), 'dst_port')
| eval protocol=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 5), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 5))), 'protocol')
| eval traffic_flow=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 6), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 6))), 'traffic_flow')
| eval traffic_result=if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 7), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 7))), 'traffic_result')

Also, I'm not sure whether there are ever events formatted slightly differently because only a single flow occurred and it is no longer an array in the JSON event, which would change the overall extracted field name to something like "records{}.properties.flows{}.flows.flowTuples{}". From a look at the microsoft_azure app configs, it looks like it's only ever referencing "records{}.properties.flows{}.flows{}.flowTuples{}" for its extractions, so I just made the assumption that events will be formatted this way.
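If it helps, this is roughly what one of them might look like as a calculated field - a sketch only, assuming the stanza name mscs:nsg:flow and that it goes in the app's local/props.conf (src_ip shown; the others follow the same pattern):

[mscs:nsg:flow]
# Calculated-field version of the src_ip eval above
EVAL-src_ip = if(isnotnull('records{}.properties.flows{}.flows{}.flowTuples{}'), case(mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')==1, mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1), mvcount('records{}.properties.flows{}.flows{}.flowTuples{}')>1, mvmap('records{}.properties.flows{}.flows{}.flowTuples{}', mvindex(split('records{}.properties.flows{}.flows{}.flowTuples{}', ","), 1))), 'src_ip')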
Hi, We are ingesting Azure NSG flow logs and visualizing them using the Microsoft Azure App for Splunk: https://splunkbase.splunk.com/app/4882 The data is in JSON format with multiple levels/records in a single event. Each record can have multiple flows, flow tuples, etc. Adding a few screenshots here to give the context. The default extractions for the main JSON fields look fine. But when it comes to the values within the flow tuple field, i.e. records{}.properties.flows{}.flows{}.flowTuples{}, Splunk only keeps values from the very first entry. How can I make these src_ip, dest_ip fields also get multiple values (across all records/flow tuples etc.)?

Splunk extracts values only from that first highlighted entry. Here is the extraction logic from this app:

[extract_tuple]
SOURCE_KEY = records{}.properties.flows{}.flows{}.flowTuples{}
DELIMS = ","
FIELDS = time,src_ip,dst_ip,src_port,dst_port,protocol,traffic_flow,traffic_result

Thanks,
@VatsalJagani I'm trying to set up an automatic import of a CSV file into a metric index. How do I schedule this process? Any insights or examples would be greatly appreciated.
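One common pattern (a sketch only - the lookup file name, field names, and index below are assumptions, not from the original post) is a scheduled report that reads the CSV and writes metric data points with mcollect:

| inputlookup my_metrics.csv
``` build the _time, metric_name and _value fields mcollect expects from the CSV columns ```
| eval _time=strptime(timestamp, "%Y-%m-%d %H:%M:%S")
| eval metric_name="cpu.utilization", _value=tonumber(cpu_pct)
| mcollect index=my_metric_index

Saving this as a report with a cron-style schedule would then re-run the import on whatever interval is needed.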
We use the free version of syslog-ng, and recently we had a requirement to add TLS on top of TCP, but we don't have the knowledge to implement it. Therefore we wonder whether anybody has migrated from syslog-ng to SC4S to handle more advanced requirements such as TCP/TLS: https://splunkbase.splunk.com/app/4740 If so, what lessons were learned and what were the motivations for doing it?
One hint for home directories: I never create the home directory as /opt/splunk or similar for any service user unless that's mandated by the service/program. It's much easier to manage the service when you can keep your own files outside of the distribution directories. It also makes it easy to take temporary conf backups etc. under home.
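As an illustration only (the names and paths are assumptions, not from the original post), creating the service account with its home under /home while the software stays in /opt might look like this:

# Create the service account with a home directory outside the install path
sudo groupadd splunk
sudo useradd -g splunk -d /home/splunk -m -s /bin/bash splunk
# The installation directory is still owned by the service user,
# but personal files and temporary conf backups live under /home/splunk
sudo chown -R splunk:splunk /opt/splunk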
Where is the data from the Splunk Enterprise Security (ES) Investigation Panel stored? In the previous version, it seemed to be stored in a KV lookup, but I can't find it in the current 7.x version. I understand that the Notable index holds information related to incidents from the Incident Review Dashboard. How can we map Splunk Notables and their Investigations together to generate a comprehensive report in the current 7.x ES version?
I'm also glad I found this thread - I only wish it had been sooner. I ran into this today when I saw a forwarder on RHEL 7 update from splunkforwarder-9.0.2-17e00c557dc1.x86_64 to splunkforwarder-9.1.2-b6b9c8185839.x86_64. In my environment, and with our Ansible automation, we pre-create the splunk user and group so that we maintain the same userid and groupid across all systems and don't conflict with users in our LDAP directory. It has always been a problem that both the forwarder and Enterprise used the same name, since they expect different home directories (i.e. /opt/splunk vs. /opt/splunkforwarder), which means trying to centralize a splunk user doesn't work well. I like the idea of having separate users/groups, but this was a surprise change to me, and I'm not sure what I am left with at the moment other than a currently broken install while I figure out what the implications of the 1400+ messages for "warning: user splunkfwd does not exist - using root" and "warning: group splunkfwd does not exist - using root" are. Presumably I can just add the splunkfwd user and group, change ownership, and run some invocation of splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd to set up the systemd unit file, but it's not something I had planned on doing. I'm also not sure if I now need to change the user defined in /opt/splunkforwarder/etc/splunk-launch.conf.
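For what it's worth, the recovery steps described above might look roughly like this - a sketch only, untested against the 9.1.2 packaging, and the uid/gid values are placeholders for whatever your environment standardizes on:

# Pre-create the new service account and group (pick your own uid/gid)
sudo groupadd -g 10002 splunkfwd
sudo useradd -u 10002 -g splunkfwd -d /opt/splunkforwarder -s /sbin/nologin splunkfwd
# Hand the install over to the new account
sudo chown -R splunkfwd:splunkfwd /opt/splunkforwarder
# Recreate the systemd unit file for the new user/group
sudo /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd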
Tried this out and came back with this. The format might be a little different than what you asked for, but I think it tells the same story.

| bucket span=1m _time
| stats count as TPS by _time
| eventstats min(TPS) as min_TPS, max(TPS) as max_TPS
| foreach *_TPS
    [ | eval <<MATCHSTR>>_TPS_epoch=if('TPS'=='<<MATCHSTR>>_TPS', mvappend('<<MATCHSTR>>_TPS_epoch', '_time'), '<<MATCHSTR>>_TPS_epoch') ]
| stats avg(TPS) as avg_TPS, first(*_TPS) as *_TPS, first(*_TPS_epoch) as *_TPS_epoch
| eval avg_TPS=round('avg_TPS', 2)
| foreach *_TPS_epoch
    [ | eval <<MATCHSTR>>_TPS_timestamps=case(mvcount('<<FIELD>>')==1, strftime('<<FIELD>>', "%x %X"), mvcount('<<FIELD>>')>1, mvmap('<<FIELD>>', strftime('<<FIELD>>', "%x %X"))), <<MATCHSTR>>_TPS_json=json_object("type", "<<MATCHSTR>>", "TPS", '<<MATCHSTR>>_TPS', "Timestamps", '<<MATCHSTR>>_TPS_timestamps'), combined_TPS_json=mvappend('combined_TPS_json', '<<MATCHSTR>>_TPS_json') ]
| fields + combined_TPS_json, avg_TPS
| addinfo
| eval search_time_window_end=strftime(info_max_time, "%x %X"), search_time_window_start=strftime(info_min_time, "%x %X"), avg_TPS_time_window='search_time_window_start'." --> ".'search_time_window_end'
| eval combined_TPS_json=mvappend('combined_TPS_json', json_object("type", "avg", "TPS", 'avg_TPS', "Timestamps", 'avg_TPS_time_window'))
| mvexpand combined_TPS_json
| fromjson combined_TPS_json
| fields - combined_TPS_json
| fields + type, TPS, Timestamps

The output should look something like this. You should also be able to change the time bucket span from 1m back to 1s, since that is how it was set up in your initial query.