All Posts


That's a good reduction with the stats values. I assume the distinct host count is significantly less than the 1.9M rows, so although you will have the same number of QIDs per host, the lookup command count will be reduced - it would be interesting to compare the job inspector details between the two searches.

As for KV store replication - a KV store on the search head is not a KV store on the indexer; instead a CSV is transferred to the indexers, so any accelerations in the KV store are lost and you are simply using CSV lookups on the indexer. I am not sure how a 250MB CSV on the indexer will be handled - if it works the same way as on the SH, it exceeds the max_memtable_bytes value discussed in limits.conf https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Limitsconf#.5Blookup.5D so I imagine it will then be "indexed" (as in a file system index) on disk. If you are not running in Splunk Cloud you may want to try a local CSV lookup and play around with the limits.conf settings for your app.

If you have the time, you might also want to experiment with the eval lookup() function and a split CSV. For example, you could split the CSV into 10 * 25MB files to stay under the existing threshold and partition the QIDs into each lookup. Then you could do some weird SPL like

| eval p=partition_logic(QID)
| eval output_json=case(
    p=1, lookup("qid_partition_1.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=2, lookup("qid_partition_2.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=3, lookup("qid_partition_3.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=4, lookup("qid_partition_4.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=5, lookup("qid_partition_5.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=6, lookup("qid_partition_6.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=7, lookup("qid_partition_7.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=8, lookup("qid_partition_8.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=9, lookup("qid_partition_9.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=10, lookup("qid_partition_10.csv", json_object("QID", QID), json_array("field1", "field2")))

Technically this could work, but whether you would see any improvements, I have no idea.

Was that 50 seconds using a local lookup or a standard lookup? If it was a normal (remote) lookup, then try local=true so it does use the local KV store.
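Note that partition_logic() above is just a placeholder, not a real eval function. A minimal sketch of one way to implement it, assuming QID is numeric and you split the CSV offline with the same rule, is a simple modulo bucket:

| eval p = (tonumber(QID) % 10) + 1

Any deterministic split works, as long as the SPL rule and the offline CSV split agree on which partition a given QID lands in.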
Hi @marnall  Your suggestion worked fine. I accepted this as a solution. Thank you so much for your help. It looks like the reason it didn't work earlier is that I assigned eval _time = info_max_time but didn't add "addinfo" first, so it fell back to the default value, which is info_min_time. Can you test on your end whether _time is set to info_min_time if you don't use/remove the following eval? Thanks

| eval _time = now() + 3600
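For reference, a minimal sketch of the ordering I believe was missing on my end (assuming the eval runs after the main search):

| addinfo
| eval _time = info_max_time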
Is there a way to use only the upper bound to define outliers? I wish to flag only values that spike above the upper bound, not those below the lower bound.
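For example, something along these lines is what I'm after (a rough sketch only - the field name value and the multiplier of 2 are placeholders):

| eventstats avg(value) AS avg_value stdev(value) AS stdev_value
| eval upperBound = avg_value + (2 * stdev_value)
| eval isOutlier = if(value > upperBound, 1, 0)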
Okay will try. Thanks.
You can sort of do that.  But why?  This gets more convoluted than your problem warrants.  Your OP says you are doing selector-in-dashboard logic.  As @bowesmana said, that's precisely what the multi-select token is for. But if you really need a CSV file to do so, name the column "NAME" instead of NAME_LIST.  Then, split the value.

| search [inputlookup csv.csv | eval NAME = split(NAME, ",")]

It doesn't really do an IN operation but is semantically equivalent. Here's an emulation:

| makeresults format=csv data="NAME
task2
task4"
| search [inputlookup csv.csv | eval NAME = split(NAME, ",")]

Your sample CSV row will give you

NAME
task2
Didn't think that far ahead, but I'll be making a dashboard for this, so I think it would be easier if I separated them instead of trying to put it all in one search. What you just did works perfectly, so thank you so much for that!
How do you envision Team and User being aligned if they are both arrays?  Your illustrated results suggest that you don't care about this part.  If so, would this do?

| stats count values(Team) as Team values(User) as User by URL
Thanks for the reply! My team ended up raising an OnDemandService request to look into this. Will report back. To answer your questions one by one:

The irony is that it was originally a CSV that we have since converted to a KV store for performance. The CSV file had become big, around 250 MB if I recall. This is the Qualys Knowledge Base we are talking about, which Qualys provides out of the box, so it is not something we can trim down to size as a CSV.

The row count after the first stats and before the lookup is 1,926,000. The fact that stats calculates this in 20 seconds is perfectly fine. The problem is that when the lookup is used, it adds an extra 140 seconds or so according to the Job Inspector. The dc(HOST_ID) ultimately ends with 7,800 rows.

Now for your suggested approach - good catch on the stats. That works great for this particular query that only cares about PATCHABLE. In fact, the last stats can be changed from dc() to just count. Much faster at 50 seconds now! At some point, however, we will need to provide full vulnerability data pulled from the Qualys Knowledge Base through Splunk as the means of reporting for the engineering teams. We will run into this problem of the lookup hanging for at least 140 seconds again.

Regarding poor performance, when you say 'replicate' - is this what you mean for collections.conf? Because the KV store is already replicated:

[qualys_kb_kvstore]
accelerated_fields.QID_accel = {"QID": 1}
replicate = true

I think ultimately we were under the impression that KV store replication to the indexers means there is a local copy handy for them, making a lookup really fast, a matter of seconds. Maybe we had the wrong expectation?
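For reference, a KV store lookup like this is typically wired up in transforms.conf along these lines - a sketch only, and the output fields other than QID and PATCHABLE are placeholders:

[qualys_kb_kvstore]
external_type = kvstore
collection = qualys_kb_kvstore
fields_list = QID, PATCHABLE, TITLE, SEVERITY

As noted above, replicate = true in collections.conf only controls whether the collection is shipped to the indexers in the knowledge bundle, where it lands as a CSV rather than an accelerated KV store.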
I have all the relevant data I need from a single source, but I can't get it to present the way I want. I want to show the departments/users and the count that are using specific URLs, and put them on a single line with the corresponding URL.

Team1                User1                 URL1                    Count
Team2                User4
Team3                User9
------------------------------------------------------------------------
Team1                User3                 URL2                    Count
Team4                User4
                     User12
                     User16
                     User17
------------------------------------------------------------------------
Team3                User1                 URL3                    Count
Team6                User3
Team10               User12
------------------------------------------------------------------------

Let me know if I need to clarify anything.
If your JSON-compliant data contains two arrays that have to be mapped externally, your developers have committed the highest design crime.  If you have any influence over the development team, beg them, implore them, curse them to change custom_attributes to something like

{"root-entity-id":"3","campaign-id":"XXXX","campaign-name":"XXXXX","marketing-area":"CCCC","record_count":"","country":"","id_array":[{"internal":"12345678","lead":"000000"},{"internal":"9876543","lead":"1111111"},{"internal":"2341234","lead":"3333333"}]}

This way, data processing (in any language, not just Splunk) will be much cleaner.  More importantly, downstream programmers such as yourself will not need vertical knowledge about implied semantics. No implied semantics is one of the most important advantages of adopting structured data formats such as JSON.  This also means lower maintenance cost in the future.
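With a structure like that, extraction becomes mechanical. A minimal sketch, assuming the restructured JSON lives in a field named custom_attributes:

| spath input=custom_attributes path=id_array{} output=id_pair
| mvexpand id_pair
| spath input=id_pair
| table internal lead

Each internal/lead pair comes out on its own row, with no positional matching between separate arrays.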
Is the data the same data or different? What is the search in each case? Take a look at the job inspector and job properties: https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html Have a look at the phase0 job property in each case and also look at the LISPY in the search.log.
There seems to be a lot of information about other Cisco VPN technologies (ASA/Firepower/AnyConnect), but I am not finding much relating to FlexVPN (site-to-site) tunnels. Maybe I am not searching for the correct terminology. FlexVPN runs on IOS XE. I have logging configured the same way, using logging trap informational (the default), and noticed that we do not seem to be getting much data about the specifics of the tunnels, negotiations, etc., from a raw syslog perspective. What we would like to be able to do is monitor the tunnels: whenever a tunnel is brought up, taken down, or the source (connection) IPs change. Possibly other things we haven't thought of yet; hoping to encounter someone else who has used the same technology and has something already built out. Thank you in advance.
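To frame what we are picturing, something roughly like the sketch below - the index, sourcetype, and message keywords are guesses on our part and would need to match whatever the IOS XE devices actually emit:

index=network sourcetype=cisco:syslog ("%CRYPTO-5-SESSION_STATUS" OR "IKEv2" OR "tunnel is UP" OR "tunnel is DOWN")
| rex "[Pp]eer\s+(?<peer_ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats latest(_time) AS last_seen values(_raw) AS sample_events by peer_ip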
Have you tried making the Qualys lookup a CSV rather than a KV store and using that? Does it exceed the CSV size threshold? What's the row count after the stats, given that you're only doing the lookup on the aggregated host count? Have you tried this approach instead?

| stats count values(QID) by HOST_ID
| lookup qualys_kb_kvstore QID AS QID OUTPUTNEW PATCHABLE
| search PATCHABLE="YES"
| stats dc(HOST_ID) ```Number of patchable hosts!```

which will reduce the stats host count to one row per host and then do an MV lookup - all you care about is whether PATCHABLE = "YES" in any of the returned results, so the search will match any MV value of YES. I do recall having some poor-performing KV store searches some time ago, but we ended up moving away from the KV store anyway, because most of our lookups needed to be done on the indexers, so unless you replicate, the data is returned to the SH, and when a KV store is replicated, it ends up as a CSV on the indexer anyway.
I have a universal forwarder running on my Domain Controller which only captures logon/logoff events.

inputs.conf
```
[WinEventLog://Security]
disabled = 0
current_only
renderXml = 1
whitelist = 4624, 4634
```

In my Splunk server I set up forwarding to a 3rd party.

outputs.conf
```
[tcpout]
defaultGroup = nothing

[tcpout:foobar]
server = 10.2.84.209:9997
sendCookedData = false

[tcpout-server://10.2.84.209:9997]
```

props.conf
```
[XmlWinEventLog:Security]
TRANSFORMS-Xml=foo
```

transforms.conf
```
[foo]
REGEX = .
DEST_KEY=_TCP_ROUTING
FORMAT=foobar
```

Before creating/editing these conf files I was seeing lots of non-Windows events being sent to the destination. With these confs in place I am not seeing any events being forwarded. What's the easiest fix to my conf files so that I only send the XML events to the 3rd party system? Thanks, Billy

EDIT: What markup does this forum use? Single/triple backticks don't work, nor does <pre></pre>.
You can't do exactly that with a 2-line header, but depending on your SPL, yes, it's possible. I'm guessing you have those results from a chart or stats command. The columns are 'sorted', so all you need to do is make your column names Start-ApplicationX and Stop-ApplicationX so the starts come before the stops, or you can just take your existing table and use the table command to do

| table *Start *Stop
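For example, a minimal sketch of the rename route using the column names from your post (wildcard renames keep it short; adjust if your real field names differ):

| rename *-Start AS Start-*, *-Stop AS Stop-*
| table Start-* Stop-*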
True enough - it's fiddly and requires post processing of the JSON output, but it's one of the rare conditional if/execute pieces of powerful logic in SPL
Hello, I would like to know if there is any way to integrate GitHub Cloud with Splunk Cloud, and how these logs can then be forwarded from Splunk to the Rapid7 SIEM.
How are you reading the values from the lookup table? You didn't say whether this was a multiselect dropdown input. No, you cannot do what you suggest here. "Parameters" generally means tokens, and multiselect specifically supports this type of case.
I have .gz syslog files but I am unable to fetch all of them. For each host (abc) there are .gz files named syslog.log.1.gz through syslog.log.24.gz. Only the one ending in 24 was ingested; for the other 23, the internal logs show "was already indexed as a non-archive, skipping".

Log path: /ad/logs/abc/syslog/syslog.log.24.gz

Internal logs:
03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Archive with path="/ad/logs/abc/syslog/syslog.log.2.gz" was already indexed as a non-archive, skipping.
03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Finished processing file '/ad/logs/abc/syslog/syslog.log.2.gz', removing from stats

Should I try crcSalt or initCrcLength?
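If it comes to experimenting with the CRC settings, here is a minimal inputs.conf sketch (the stanza path and whitelist are only illustrations of this directory, not a tested config):

[monitor:///ad/logs/abc/syslog]
whitelist = \.gz$
crcSalt = <SOURCE>

crcSalt = <SOURCE> folds the full path into the CRC, so files whose first bytes look identical are no longer treated as already-indexed copies of each other.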
I have a query where my results look like this:

Application1-Start  Application1-Stop  Application2-Start  Application2-Stop  Application3-Start  Application3-Stop
10                  4                  12                  7                  70                  30
12                  8                  10                  4                  3                   2
14                  4                  12                  5                  16                  12

But I want to see the output as shown below. Is that possible?

Start         Start         Start         Stop          Stop          Stop
Application1  Application2  Application3  Application1  Application2  Application3
10            12            70            4             7             30
12            10            3             8             4             2
14            12            16            4             5             12