
All Posts

How do you envision Team and User being aligned if they are both arrays? Your illustrated results suggest that you don't care about this part. If so, would this do?

| stats count values(Team) as Team values(User) as User by URL
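For example, a run-anywhere sketch of what that stats call produces, using makeresults with made-up Team/User/URL values (none of these values come from your data):

| makeresults count=6
| streamstats count as n
``` fabricate a few Team/User/URL combinations ```
| eval URL="URL".((n%3)+1), Team="Team".((n%2)+1), User="User".n
| stats count values(Team) as Team values(User) as User by URL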
Thanks for the reply! My team ended up opening an OnDemandService request to look into this. Will report back. To answer your questions one by one:

The irony is that it was originally a CSV that we have since converted to KV store for performance. The CSV file had become big, around 250 MB if I recall. This is the Qualys Knowledge Base we are talking about, which Qualys provides out of the box, so it is not something we can trim down to size as a CSV.

The row count after the first stats and before the lookup is 1,926,000. The fact that stats calculates this in 20 seconds is perfectly fine. The problem comes when lookup is used: it adds an extra 140 seconds or so according to the Job Inspector. The dc(HOST_ID) ultimately ends with 7,800 rows.

Now for your suggested approach - good catch on the stats. That works great for this particular query, which only cares about PATCHABLE. In fact, the last stats can be changed from dc() to just count. Much faster at 50 seconds now! At some point, however, we will need to provide full vulnerability data pulled from the Qualys Knowledge Base through Splunk as the means of reporting for the engineering teams, and we will run into this problem of the lookup hanging for at least 140 seconds again.

Regarding poor performance, when you say 'replicate' - is this what you mean for collections.conf? Because the KV store is already replicated:

[qualys_kb_kvstore]
accelerated_fields.QID_accel = {"QID": 1}
replicate = true

I think ultimately we were under the impression that KV store replication to the indexers gives them a local copy, making a lookup really fast - a matter of seconds. Maybe we had the wrong expectation?
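As an aside, if the CSV route is ever revisited: the limits.conf threshold below governs when a large CSV lookup file is index-backed on disk instead of read fully into memory. A sketch only - the value shown is illustrative, not a recommendation, and the default varies by version:

[lookup]
# CSV lookup files larger than this many bytes are indexed on disk
# rather than held in memory (illustrative value)
max_memtable_bytes = 262144000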
I have all the relevant data I need from a single source, but I want to present it in a way that I can't get to work. I want to show which departments/users are using specific URLs, along with the count, and put them on a single line with the corresponding URL.

Team1     User1     URL1     Count
Team2     User4
Team3     User9
------------------------------------------------------------------------
Team1     User3     URL2     Count
Team4     User4
          User12
          User16
          User17
------------------------------------------------------------------------
Team3     User1     URL3     Count
Team6     User3
Team10    User12
------------------------------------------------------------------------

Let me know if I need to clarify anything.
If your JSON-compliant data contains two arrays that have to be mapped to each other externally, your developers have committed the highest design crime. If you have any influence over the development team, beg them, implore them, curse them to change custom_attributes to something like

{"root-entity-id":"3","campaign-id":"XXXX","campaign-name":"XXXXX","marketing-area":"CCCC","record_count":"","country":"","id_array":[{"internal":"12345678","lead":"000000"},{"internal":"9876543","lead":"1111111"},{"internal":"2341234","lead":"3333333"}]}

This way, data processing (in any language, not just Splunk) will be much cleaner. More importantly, downstream programmers such as yourself will not need to have this vertical knowledge about implied semantics. Having no implied semantics is one of the most important reasons people adopt structured data formats such as JSON. This means lower maintenance cost in the future.
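To illustrate, here is a sketch of how that restructured id_array could be consumed in SPL; the makeresults test event below is fabricated purely for demonstration:

| makeresults
| eval _raw="{\"id_array\":[{\"internal\":\"12345678\",\"lead\":\"000000\"},{\"internal\":\"9876543\",\"lead\":\"1111111\"}]}"
``` pull each array element out as its own row, then parse it ```
| spath path=id_array{} output=pair
| mvexpand pair
| spath input=pair
| table internal lead

Each internal/lead pairing falls out directly, with no positional bookkeeping between two parallel arrays.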
Is the data the same data or different? What is the search in each case?

Take a look at the job inspector and job properties
https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html

Have a look at the phase0 job property in each case and also look at the LISPY in the search.log
There seems to be a lot of information about other Cisco VPN technologies (ASA/Firepower/AnyConnect) but I am not finding much relating to FlexVPN (site-to-site) tunnels. Maybe I am not looking up the correct terminology. FlexVPN runs on IOS XE.

I have logging configured the same as far as using logging trap informational (the default), and noticed that we do not seem to be getting much data about the specifics of the tunnels, negotiations, etc., from a raw syslog perspective. What we would like to be able to do is monitor the tunnels so we know whenever a tunnel is brought up, taken down, or source (connection) IPs change. Possibly other things we haven't thought of yet; hoping to encounter someone else who has used the same technologies and has something already built out. Thank you in advance.
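For what it's worth, a sketch of the kind of search that might catch tunnel state changes. It assumes your syslog lands in an index called network with a cisco:ios-style sourcetype and that the router emits %CRYPTO-5-SESSION_STATUS messages when crypto session logging is enabled - all of those are assumptions to verify against your own environment:

index=network sourcetype=cisco:ios "%CRYPTO-5-SESSION_STATUS"
``` hypothetical peer-IP extraction; adjust to your message format ```
| rex "Peer\s+(?<peer_ip>\d+\.\d+\.\d+\.\d+)"
| stats latest(_time) as last_seen latest(_raw) as last_event by peer_ip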
Have you tried making the Qualys lookup a CSV rather than KV and using that? Does it exceed the CSV size threshold? What's the row count after the stats, given you're only doing the lookup on the aggregated host count?

Have you tried this approach instead?

| stats count values(QID) as QID by HOST_ID
| lookup qualys_kb_kvstore QID AS QID OUTPUTNEW PATCHABLE
| search PATCHABLE="YES"
| stats dc(HOST_ID) ```Number of patchable hosts!```

which will reduce the stats host count to one row per host and then do an MV lookup - all you care about is whether PATCHABLE = "YES" appears in any of the returned results, so using search (rather than where) will match any MV value of YES.

I do recall having some poorly performing KV searches some time ago, but we ended up moving away from KV store anyway, because most of our lookups needed to be done on the indexers; unless you replicate, the data is returned to the SH, and when KV is replicated, it ends up as CSV on the indexer anyway.
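If the CSV route does get tested, a minimal transforms.conf sketch for a file-based lookup definition; the stanza and file names here are placeholders, not the poster's actual configuration:

[qualys_kb_csv]
filename = qualys_kb.csv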
I have a universal forwarder running on my Domain Controller which only captures logon/logoff events.

inputs.conf

[WinEventLog://Security]
disabled = 0
current_only = 1
renderXml = 1
whitelist = 4624, 4634

On my Splunk server I set up forwarding to a 3rd party.

outputs.conf

[tcpout]
defaultGroup = nothing

[tcpout:foobar]
server = 10.2.84.209:9997
sendCookedData = false

[tcpout-server://10.2.84.209:9997]

props.conf

[XmlWinEventLog:Security]
TRANSFORMS-Xml = foo

transforms.conf

[foo]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = foobar

Before creating/editing these conf files I was seeing lots of non-Windows events being sent to the destination. With these confs in place I am not seeing any events being forwarded. What's the easiest fix to my conf files so that I only send the XML Windows events to the 3rd party system?

Thanks, Billy

EDIT: What markup does this forum use? Single/triple backticks don't work, nor does <pre></pre>.
You can't do exactly that with a 2-line header, but depending on your SPL, yes, it's possible. I'm guessing you have those results from a chart or stats command. The columns are 'sorted', so all you need to do is make your column names Start-ApplicationX and Stop-ApplicationX so the starts come before the stops, or you can just take your existing table and use the table command to do

| table *Start *Stop
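A run-anywhere sketch of that wildcard reordering, with values invented for the demo:

| makeresults
| eval "Application1-Start"=10, "Application1-Stop"=4, "Application2-Start"=12, "Application2-Stop"=7
| table *Start *Stop

The Start columns land before the Stop columns without renaming anything.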
True enough - it's fiddly and requires post-processing of the JSON output, but it's one of the rare conditional if/execute pieces of powerful logic in SPL.
Hello, I would like to know if there is any way to integrate GitHub Cloud with Splunk Cloud, and how these logs can then be forwarded from Splunk to Rapid7 SIEM?
How are you reading the values from the lookup table - you didn't say whether this is a multiselect dropdown input? No, you cannot do what you suggest here. "Parameters" generally mean tokens, and multiselect inputs specifically support this type of case.
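For reference, a minimal Simple XML sketch of a multiselect populated from a lookup; my_lookup and my_field are placeholder names, not taken from the original question:

<input type="multiselect" token="selection">
  <label>Pick values</label>
  <search>
    <query>| inputlookup my_lookup | stats count by my_field</query>
  </search>
  <fieldForLabel>my_field</fieldForLabel>
  <fieldForValue>my_field</fieldForValue>
  <delimiter>,</delimiter>
</input>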
I have .gz syslog files but I am unable to fetch all of them. For each host (abc) there are 23 .gz files, with extensions like syslog.log.1.gz through syslog.log.24.gz. I only see the one ending in 24 ingested but not the others; for all the others the internal logs say "was already indexed as a non-archive, skipping".

Log path: /ad/logs/abc/syslog/syslog.log.24.gz

Internal logs:

03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Archive with path="/ad/logs/abc/syslog/syslog.log.2.gz" was already indexed as a non-archive, skipping.
03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Finished processing file '/ad/logs/abc/syslog/syslog.log.2.gz', removing from stats

Should I try crcSalt or initCrcLength?
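If those really are distinct files, the usual knob to try is crcSalt on the monitor stanza. A sketch reusing the path from the post - test carefully first, since crcSalt = <SOURCE> will re-index a file whenever its name changes:

[monitor:///ad/logs/abc/syslog/syslog.log*.gz]
crcSalt = <SOURCE>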
I have a query where my results look like this:

Application1-Start  Application1-Stop  Application2-Start  Application2-Stop  Application3-Start  Application3-Stop
10                  4                  12                 7                  70                 30
12                  8                  10                 4                  3                  2
14                  4                  12                 5                  16                 12

But I want to see the output as shown below. Is that possible?

Start         Start         Start         Stop          Stop          Stop
Application1  Application2  Application3  Application1  Application2  Application3
10            12            70            4             7             30
12            10            3             8             4             2
14            12            16            4             5             12
Assuming these are numeric (not strings), you can use streamstats

| streamstats window=2 range(USAGE) as difference
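A run-anywhere sketch with made-up USAGE values:

| makeresults count=4
| streamstats count as n
| eval USAGE=n*10
| streamstats window=2 range(USAGE) as difference

Each row's difference is the range (max minus min) of the current and previous USAGE values.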
Hello, how can I ensure the data being sent to cool_index is rolled to cold when the data is 120 days old? The config I'll use:

[cool_index]
homePath = volume:hotwarm/cool_index/db
coldPath = volume:cold/cool_index/colddb
thawedPath = $SPLUNK_DB/cool_index/thaweddb
# 120 day retention
frozenTimePeriodInSecs = 10368000
maxTotalDataSizeMB = 60000
maxDataSize = auto
repFactor = auto

Am I missing something?
Where are you applying the Event Hubs Data Receiver role? I usually apply it at the Subscription level so that any other namespaces created in the same subscription will inherit the necessary permissions. There is a walkthrough here (Step 4) => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data

The SSL error you are getting may be a private certificate in the certificate chain. I have also seen similar issues when a network device injects a private cert into the certificate chain on outbound traffic.
Actually, it may be that something is wrong with your CIM Validator. Even if I try to search a non-existent index, it still populates the counters at the top and shows rows of "no extracted values found".

Which version of CIM Validator are you using? Perhaps you could try backing up the current CIM Validator app, then re-installing it.
Sure thing. For testing I am using this SPL (time range set to "Last 30 Days"):

index=_internal
| table _time sourcetype
| head 5
| eval othertestfield="test1"
| eval _time = now() + 3600
| collect index=summary testmode=true addtime=true

It produces the following output:

_time | sourcetype | _raw | othertestfield
2024-03-12T22:50:05.000+01:00 | splunkd | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1

I ran the search at 21:50 CET, and the _time field shows the current time plus 3600 seconds.
Per the docs, it must be in system/local
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Web-featuresconf

# To use one or more of these configurations, copy the configuration block into
# the web-features.conf file located in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk software after you make changes to this setting to enable configurations.

BTW, btool isn't always the best way to check settings, as it just reads the OS files and parses the data there; the configuration files seen by btool may or may not be valid.