All Posts


I'm sorry I didn't see your reply sooner, thank you so much! You're a hero!!
Hello,

In short, I have to transform a file, and I leverage the /vault/tmp/ directory. I'm able to do what I want, but I'm wondering if I have to 'clean up' this /vault/tmp/ directory.

For example, I have a file I want to XOR bit by bit. I read unxord.exe bit by bit, write the result to /vault/tmp/xord.exe, then I do a phantom.vault_add(file_location="/vault/tmp/xord.exe"). This works fine.

Do I have to do any removal of "/vault/tmp/xord.exe"? I've tried something like:

import os
os.remove("/vault/tmp/xord.exe")

However, I get a path-not-found error.

So, how often does Phantom SOAR clean up the /vault/tmp/ directory, and can/should I remove the temp file myself?

Thanks!
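One defensive cleanup pattern, sketched in plain Python. The function name is made up for illustration, and the key assumption (not confirmed here against the Phantom docs) is that vault_add() may move the file out of /vault/tmp/ rather than copy it, which would explain the path-not-found error on a later os.remove(). The vault_add() call itself only exists inside SOAR playbooks, so it is shown as a comment:

```python
# Sketch only. Assumes phantom.vault_add() MAY move (not copy) the file,
# so the removal is guarded by an existence check.
import os

def write_transformed_and_cleanup(path, data):
    """Write transformed bytes to a temp path, hand the file to the vault,
    then remove the temp copy only if it is still on disk."""
    with open(path, "wb") as f:
        f.write(data)
    # success, message, vault_id = phantom.vault_add(file_location=path)
    if os.path.exists(path):           # guard: the vault may have taken the file
        os.remove(path)
    return not os.path.exists(path)    # True when the temp copy is gone
```

With this guard, the cleanup is a no-op when the vault has already taken ownership of the file, and an explicit delete otherwise.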
Hello Team,

I have the specific data:

1-10:20:30 (this stands for 1 day 10 hours 20 minutes 30 seconds)
10:20:30 (this stands for 10 hours 20 minutes 30 seconds)
20:30 (this stands for 20 minutes 30 seconds)

I would like to create a table where I get the output in seconds:

ELAPSED     | seconds_elapsed
1-10:20:30  | 123630
10:20:30    | 37230
20:30       | 1230

Below is the rex I am using to get this data, but for the last entry (20:30) I am getting an empty output. Is there a way to fix this?

| rex field=ELAPSED "((?<dd>\d+)\-?)((?<hh>\d+)\:?)((?<mm>\d+)\:)?(?<ss>\d+)$"
| eval seconds_elapsed=(dd * 86400) + (hh * 3600) + (mm * 60) + (ss*1)
| table ELAPSED seconds_elapsed _time
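One possible direction, offered as an untested sketch rather than a verified answer: make the day and hour groups genuinely optional in the rex (in the original pattern they are mandatory, so "20:30" gets carved up incorrectly), then guard the missing fields with coalesce so the arithmetic does not go null:

```
| rex field=ELAPSED "^(?:(?<dd>\d+)-)?(?:(?<hh>\d+):)?(?<mm>\d+):(?<ss>\d+)$"
| eval seconds_elapsed = coalesce(dd,0)*86400 + coalesce(hh,0)*3600 + mm*60 + ss
| table ELAPSED seconds_elapsed
```

For "20:30" the optional hour group fails to match (there is no second colon), so backtracking assigns mm=20 and ss=30; for "1-10:20:30" all four groups populate.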
Starter content for the ask-question category for the upcoming app. CISOs currently have difficulty demonstrating the value of the organization's security program to other executives as well as board members. Additionally, meaningful performance indicators do not exist to inform executive leadership and the board of the resilience of the organization's security posture, nor do indicators exist to provide focus on the "day-to-day" tasks that are a priority. Does Splunk have a Security Metrics App covering Network, Endpoint, and Vulnerability focused dashboards?

1. Mean time to Detect
2. Mean time to Investigate
3. Mean time to Respond
4. Trend of the mean time to detect/respond/investigate
5. % of incidents detected within the organization's SLAs
6. % of incidents investigated within the organization's SLAs
7. % of incidents responded to within the organization's SLAs
8. Trend of #5, 6, and 7
9. ES notables last 24 hrs
10. Risk notables last 24 hrs
11. Trend of #9 and 10
12. ES notables abandoned last 24 hrs
13. Risk notables abandoned last 24 hrs
14. Trend of #12 and 13

#splunk_security_metrics
What I am saying is that @gcusello's solution is the most generic, because day, hour, and minute can be determined algorithmically with no ambiguity. These are time units, whereas year and month are calendar concepts. You cannot treat month as a duration because there is no meaningful fixed length of a month. (Week, on the other hand, can be calculated.) Using year is not very accurate either; yes, you can use 365.25, but that is still an approximation. If you insist on using year (but forgo month), you can continue to use your formula, or this simplified one:

| eval duration=strptime(till,"%m/%d/%Y %I:%M %p")-strptime(from,"%m/%d/%Y %I:%M %p")
| eval years = floor(duration / 86400 / 365.25)
| eval duration = years . " year(s) " . tostring(duration - years * 365.25 * 86400, "duration")

(Note floor rather than round, so the remainder can never go negative.)

The only way to define a "month" as a time interval (duration) is to choose a reference calendar. For example, use the Unix epoch calendar, which starts in 1970:

| eval duration=strptime(till,"%m/%d/%Y %I:%M %p")-strptime(from,"%m/%d/%Y %I:%M %p")
| eval years = tonumber(strftime(duration, "%Y")) - 1970
| eval months = tonumber(strftime(duration, "%m")) - 1
| eval ymdt = years . " year(s) " . months . " month(s) " . strftime(duration, "%d") . " day(s) " . strftime(duration, "%T")

Of course, you can also change the reference calendar to a different year. But honestly, I cannot see any useful application of this format. Here is an emulation:

| makeresults format=csv data="from, till
11/28/2023 03:38 PM, 11/28/2024 04:08 PM"
``` data emulation above ```

It will give the following:

duration         | from                | months | till                | years | ymdt
31624200.000000  | 11/28/2023 03:38 PM | 0      | 11/28/2024 04:08 PM | 1     | 1 year(s) 0 month(s) 01 day(s) 16:30:00
You would not be the first person to conflate the inputlookup and lookup commands. This is a classic use case for lookup. Insert the lookup command late in the query to pull the reason from the CSV.

index=vulnerability severity=critical
| eval first_found=replace(first_found, "T\S+", "")
| eval first_found_epoch=strptime(first_found, "%Y-%m-%d")
| eval last_found=replace(last_found, "T\S+", "")
| eval last_found_epoch=strptime(last_found, "%Y-%m-%d")
| eval last_found_65_days=relative_time(last_found_epoch,"-65d@d")
| fieldformat last_found_65_days_convert=strftime(last_found_65_days, "%Y-%m-%d")
| where first_found_epoch>last_found_65_days
| sort -first_found
| dedup cve
| lookup mylookup.csv ScanHost as asset_fqdn target-CVE as cve OUTPUT Reason
| rename severity AS Severity, first_found AS "First Found", last_found AS "Last Found", asset_fqdn AS Host, ipv4 AS IP, cve AS CVE, output AS Description
| streamstats count as "Row #"
| table Severity,"First Found","Last Found",Host,IP,CVE,Description,Reason

Pro tip: do everything you can to avoid using hyphens in field names. Splunk sometimes interprets a hyphen as a minus operator, which can break a query.
Hi

This depends on your use case and on where and how you have installed Splunk. On premises, block storage is in most cases the best (or only) suitable solution, because you rarely have a storage network and storage fast enough (in both latency and bandwidth) for object storage. But in a cloud such as AWS, you can use object storage (S3) if you build your environment with a large enough local cache (NVMe). Basically, most of your requests must hit the cache; if too many requests go to S3, you will suffer bad response times and a poor user experience.

r. Ismo
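To make the AWS case concrete, here is a minimal SmartStore configuration sketch; the bucket name and cache size are placeholders, not recommendations:

```
# indexes.conf (sketch; bucket name is hypothetical)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes

[default]
remotePath = volume:remote_store/$_index_name

# server.conf (sketch) - size the local NVMe cache so hot data stays local
[cachemanager]
max_cache_size = 500000
```

The cache size (in MB) is the knob that decides whether most searches are served locally or fall through to S3, which is exactly the trade-off described above.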
Hi @Rinku.singh, Thanks for letting me know. Let's see if the Community can jump in and help. If you happen to find the answer yourself, please do share it here as a reply.
Hi @Shriraam.M, Do you still need help, or was @Terence.Chen's insight helpful?
Hi

It's like @richgalloway said, but I think you could try to write your own "backend" and use the scripted authentication method in Splunk. I'm quite sure, though, that this is not worth the work needed to create and maintain that backend.

r. Ismo
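For anyone who does go down that path, the wiring lives in authentication.conf; a minimal sketch, where the script path is a hypothetical placeholder:

```
# authentication.conf (sketch; script path is a hypothetical example)
[authentication]
authType = Scripted
authSettings = script

[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/system/bin/my_scripted_auth.py"
```

The script itself must implement the userLogin, getUserInfo, and getUsers handlers that Splunk's scripted-auth contract expects, which is exactly the create-and-maintain burden mentioned above.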
The field name is log.stdout.

| spath input=log.stdout

See my earlier comment: https://community.splunk.com/t5/Splunk-Search/How-to-use-spath-with-string-formatted-events/m-p/670776/highlight/true#M229914
Hi @QuantumRgw,

This looks like a different issue: it seems there's a delay in search execution, which could be caused by insufficient hardware resources or too many scheduled searches. Wait a while and see if the issue continues; then analyze the load on your system using the Monitoring Console.

Ciao.
Giuseppe
How do I grab all of the versions of Splunk EXCEPT the top 1, basically the opposite of:

index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
| top limit=1 Version
| table Version

It would be nice if there were a top limit=-1 option. Or, how do I negate a subsearch?

index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
    [search index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
    | top limit=1 Version
    | table Version]
| dedup host, Version
| table host Name Version

I want to search for all computers with other versions of Splunk.
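One possible direction, as an untested sketch: prefix the subsearch with NOT, so the single Version value it returns becomes an exclusion filter on the outer search rather than a match condition:

```
index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
    NOT [search index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
         | top limit=1 Version
         | fields Version]
| dedup host, Version
| table host Name Version
```

The subsearch renders as Version="<top value>", so NOT turns it into Version!="<top value>" for the outer search.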
Hi @Mr_Adate,

It isn't a good idea to add a new post, even on the same topic, to an existing question (especially a closed one!) because it's more difficult to get an answer, so next time please open a new question.

Anyway, what do you mean by "compare"? Do you want to filter the rows of one lookup with the ones of the second, or do you want to know whether a value is present in both lookups or in only one of them?

If the first, please try something like this:

| inputlookup Firewall_NEW_Database.csv WHERE [ | inputlookup Firewall_OLD_Database.csv | rename Firewall AS Hostname | fields Hostname ]
| table Hostname Location Datacenter

If the second, please try:

| inputlookup Firewall_NEW_Database.csv
| eval lookup="Firewall_NEW_Database.csv"
| append [ | inputlookup Firewall_OLD_Database.csv | rename Firewall AS Hostname | eval lookup="Firewall_OLD_Database.csv" | fields Hostname Location Datacenter lookup ]
| stats values(Location) AS Location values(Datacenter) AS Datacenter dc(lookup) AS lookup_count values(lookup) AS lookup BY Hostname
| eval lookup=if(lookup_count=2, "Both lookups", lookup)
| table Hostname Location Datacenter lookup

Ciao.
Giuseppe
VERY new to Splunk. I have a query that scans a vulnerability report for critical vulnerabilities:

index=vulnerability severity=critical
| eval first_found=replace(first_found, "T\S+", "")
| eval first_found_epoch=strptime(first_found, "%Y-%m-%d")
| eval last_found=replace(last_found, "T\S+", "")
| eval last_found_epoch=strptime(last_found, "%Y-%m-%d")
| eval last_found_65_days=relative_time(last_found_epoch,"-65d@d")
| fieldformat last_found_65_days_convert=strftime(last_found_65_days, "%Y-%m-%d")
| where first_found_epoch>last_found_65_days
| sort -first_found
| dedup cve
| rename severity AS Severity, first_found AS "First Found", last_found AS "Last Found", asset_fqdn AS Host, ipv4 AS IP, cve AS CVE, output AS Description
| streamstats count as "Row #"
| table Severity,"First Found","Last Found",Host,IP,CVE,Description,Reason

Which gives me output similar to this:

critical  2023-10-11  2023-11-20  host1.example.com  192.168.101.12   CVE-2021-0123  blah blah blah
critical  2023-03-25  2023-11-20  host2.example.com  192.168.101.25   CVE-2022-0219  blah blah blah
critical  2023-06-23  2023-11-20  host3.example.com  192.168.101.102  CVE-2023-0489  blah blah blah
critical  2023-08-05  2023-11-20  host4.example.com  192.168.101.145  CVE-2023-0456  blah blah blah

I also have a .csv lookup file where I keep extra information on certain hosts:

ScanHost           ScanIP          target-CVE     Reason
host2.example.com  192.168.101.25  CVE-2022-0219  CVE can not be mitigated

What I'm trying to do is take the Host from the search and, if it matches a ScanHost in the CSV, fill in the Reason field from the .csv.
Attached result 2.jpg
Hi ITWhisperer,

I fixed it. Thank you very much for your help. With this, it is working properly (see attached 2.jpg):

| sort StartTime
| eval row=mvrange(0,4)
| mvexpand row
| eval _time=case(row=0,strptime(StartTime,"%Y-%m-%d %H:%M:%S"),row=1,strptime(StartTime,"%Y-%m-%d %H:%M:%S"),row=2,strptime(EndTime,"%Y-%m-%d %H:%M:%S"),row=3,strptime(EndTime,"%Y-%m-%d %H:%M:%S"))
| eval value=case(row=0,1,row=1,1,row=2,1,row=2,0)   <-- here is the difference
| table _time value
Sure @ITWhisperer 
But if I use this code on the content which I mentioned in the main description, I receive these results (see attached 1.jpg). And this is quite good for me, except the triangle step. Any idea how to fix it?

| eval row=mvrange(0,4)
| mvexpand row
| eval _time=case(row=0,strptime(StartTime,"%Y-%m-%d %H:%M:%S"),row=1,strptime(StartTime,"%Y-%m-%d %H:%M:%S"),row=2,strptime(EndTime,"%Y-%m-%d %H:%M:%S"),row=3,strptime(EndTime,"%Y-%m-%d %H:%M:%S"))
| eval value=case(row=0,1,row=1,1,row=2,1,row=3,0)
| table _time value
Looks like you haven't evaluated _time:

| eval _time=case(row=0,strptime(StartTime,"%F %T.%6N"),row=1,strptime(StartTime,"%F %T.%6N"),row=2,strptime(EndTime,"%F %T.%6N"),row=3,strptime(EndTime,"%F %T.%6N"))