All Posts



This helped me so much. I've been racking my brain for the last 9 hours trying to figure out why this wouldn't work. I found this earlier, but I ran it in PowerShell and was defeated. I then came back, read this again, and saw that you said to put it into cmd. It worked immediately. I'm so grateful.
I am attempting to use Splunk to remove the Oracle WebLogic files that are filling up our hard drive. I have been able to remove other files with a different filename format using the batch input, but the following stanza is not working:

[batch://C:\Oracle\config\domains\csel\servers\...\DefaultAuditRecorder.*.log]

The filename format is DefaultAuditRecorder.############.log, where # is a digit. Any suggestions?
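One pattern that is often suggested when a wildcard inside a batch path doesn't match is to point the stanza at the directory instead and filter filenames with a whitelist regex. This is only a hypothetical sketch, assuming whitelist/blacklist are honored for batch inputs the same way they are for monitor inputs:

```ini
# Hypothetical inputs.conf sketch -- directory stanza plus a whitelist regex.
# batch requires move_policy = sinkhole (files are deleted after indexing),
# and whitelist is a regex matched against the full path, not a glob.
[batch://C:\Oracle\config\domains\csel\servers\...]
move_policy = sinkhole
whitelist = DefaultAuditRecorder\.\d+\.log$
```

The `\d+` assumes the middle segment is always digits, per the filename format described above.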
A slight variation on @PickleRick's example: your foreach statement only needs to be

| foreach "*" [ eval <<FIELD>>=case('<<FIELD>>'=0, " ", '<<FIELD>>'>0, " ", 1==1, '<<FIELD>>') ]

The above allows for a count > 1 to still get the green tick; if the value will only ever be 0 or 1, you can simplify it accordingly. There is no need to test for the queue name, as long as it's never numeric.
Your search= statement is simply looking for the literal string index=oracle somewhere in the dashboard. If the dashboard has index="oracle" or index = oracle, it won't match, so it may be better to use a regex-based where clause, e.g.

... | where match('eai:data', "(?i)index\s*(=[\s\"]*|in\s+\([\w,]*)oracle")

What is an example of a metrics index search that is not showing up?
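The regex above can be sanity-checked outside Splunk; here's a quick Python check (Python's `re` syntax is close enough to PCRE for this pattern). The sample strings are made-up dashboard fragments, not taken from any real dashboard:

```python
import re

# Same pattern as in the where match(...) clause above
pattern = re.compile(r"(?i)index\s*(=[\s\"]*|in\s+\([\w,]*)oracle")

samples = [
    'index=oracle',            # plain form
    'index="oracle"',          # quoted value
    'index = oracle',          # spaces around =
    'index IN (main,oracle)',  # IN-list form
    'index=orange',            # should NOT match
]
for s in samples:
    print(s, "->", bool(pattern.search(s)))
```

All four variants match; only `index=orange` does not, which is the behavior the where clause relies on.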
Hi, I've been trying to follow the documentation to install the credentials for Windows for the Universal Forwarder. It's been a nightmare, to say the least; the documentation is rather confusing. I ran the wget command to download the universal forwarder, then used

msiexec.exe /i splunkuniversalforwarder_x86.msi RECEIVING_INDEXER="indexer1:9997" WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 AGREETOLICENSE=Yes /quiet

to install and agree to the license. Now I'm stuck. I've tried following the example and used

C:\ProgramFiles\splunkuniversalforwarder\bin\splunk.exe install app C:\Users\Ryzen5\Downloads\splunkclouduf.spl

to run the file for the credentials, and I'm getting errors. I tried several variations and nothing is working. I don't know if I am missing something that is glaringly obvious. Any help would be appreciated. I followed https://docs.splunk.com/Documentation/Forwarder/8.2.0/Forwarder/InstallaWindowsuniversalforwarderfromthecommandline for the installation, and I TRIED following the Windows instructions from https://docs.splunk.com/Documentation/Forwarder/9.1.2/Forwarder/ConfigSCUFCredentials.
Hi, I am working on expanding an alert action in a custom app to allow users to select a specific "service" category for an alert via a <splunk-search-dropdown>, based on their selection of a <select> option for an "owner team". I've tried using tokens, as well as JS and jQuery, to achieve this, but to no avail, as it appears that Splunk does not play well with <script> tags. Is there something I'm overlooking in the documentation? I know we can do this in dashboards, but I haven't been able to find any other community posts regarding this issue on alert action UIs.
I'm sorry I didn't see your reply sooner, thank you so much! You're a hero!!
Hello,

In short, I have to transmute a file, and I leverage the /vault/tmp/ directory. I'm able to do what I want, but I'm wondering if I have to 'clean up' this /vault/tmp/ directory.

Example: I have a file I want to XOR bit by bit. I read unxord.exe bit by bit, write to /vault/tmp/xord.exe, then I do a phantom.vault_add(file_location="/vault/tmp/xord.exe"). This works fine. Do I have to do any removal of "/vault/tmp/xord.exe"?

I've tried something like:

import os
os.remove("/vault/tmp/xord.exe")

However, I get a path-not-found error. So, how often does Phantom SOAR clean up the /vault/tmp/ directory, and can/should I remove the temp file myself?

Thanks!
Hello Team,

I have the following data:

1-10:20:30 (1 day 10 hours 20 minutes 30 seconds)
10:20:30 (10 hours 20 minutes 30 seconds)
20:30 (20 minutes 30 seconds)

I would like to create a table where I get the output in seconds:

ELAPSED     | seconds_elapsed
1-10:20:30  | 123630
10:20:30    | 37230
20:30       | 1230

Below is the rex I am using to get this data, but for the last entry (20:30) I am getting an empty output. Is there a way to fix this?

| rex field=ELAPSED "((?<dd>\d+)\-?)((?<hh>\d+)\:?)((?<mm>\d+)\:)?(?<ss>\d+)$"
| eval elapsed_secs=(dd * 86400) + (hh * 3600) + (mm * 60) + (ss*1)
| table ELAPSED seconds_elapsed _time
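The rex above mis-handles the short form because the day and hour groups are not truly optional, so "20:30" doesn't get parsed as minutes:seconds. Making day and hour properly optional, and always requiring minutes:seconds, fixes the logic; here it is checked in Python (Python's named-group syntax is (?P<name>...), while Splunk's rex uses (?<name>...), but the pattern is otherwise the same):

```python
import re

# day ("1-") and hour ("10:") are optional; "mm:ss" is always present
pat = re.compile(r"^(?:(?P<dd>\d+)-)?(?:(?P<hh>\d+):)?(?P<mm>\d+):(?P<ss>\d+)$")

def to_seconds(elapsed):
    m = pat.match(elapsed)
    if not m:
        return None
    dd, hh, mm, ss = (int(g or 0) for g in m.group("dd", "hh", "mm", "ss"))
    return dd * 86400 + hh * 3600 + mm * 60 + ss

print(to_seconds("1-10:20:30"))  # 123630
print(to_seconds("10:20:30"))    # 37230
print(to_seconds("20:30"))       # 1230
```

Note that for "20:30" the regex engine first tries to use the leading number as hours, fails to find a second colon, and backtracks to the minutes group, which is exactly the behavior needed here.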
Starter content for the ask-question category for the upcoming app.

CISOs currently have difficulty demonstrating the value of the organization's security program to other executives as well as board members. Additionally, meaningful performance indicators do not exist to inform executive leadership and the board of the resilience of the organization's security posture, nor do indicators exist to provide focus on the day-to-day tasks that are a priority.

Does Splunk have a Security Metrics app covering network-, endpoint-, and vulnerability-focused dashboards?

1. Mean time to detect
2. Mean time to investigate
3. Mean time to respond
4. Trend of the mean time to detect/investigate/respond
5. % of incidents detected within the organization's SLAs
6. % of incidents investigated within the organization's SLAs
7. % of incidents responded to within the organization's SLAs
8. Trend of #5, #6, and #7
9. ES notables, last 24 hrs
10. Risk notables, last 24 hrs
11. Trend of #9 and #10
12. ES notables abandoned, last 24 hrs
13. Risk notables abandoned, last 24 hrs
14. Trend of #12 and #13

#splunk_security_metrics
What I am saying is that @gcusello's solution is the most generic, because day, hour, and minute can be algorithmically determined with no ambiguity. These are time units, whereas year and month are calendar concepts. You cannot treat month as a duration because there is no fixed-length definition of a month. (Week, on the other hand, can be calculated.) Using year is not very accurate either: yes, you can use 365.25, but that is still an approximation. If you insist on using year (but forgo month), you can continue to use your formula, or this simplified one:

| eval duration=strptime(till,"%m/%d/%Y %I:%M %p")-strptime(from,"%m/%d/%Y %I:%M %p")
| eval years = floor(duration / 86400 / 365.25)
| eval duration = years . " year(s) " . tostring(duration - years * 365.25 * 86400, "duration")

(Note floor rather than round; rounding up would leave a negative remainder.)

The only way to define a "month" as a time interval (duration) is to choose a reference calendar, for example the Unix epoch calendar, i.e. 1970:

| eval duration=strptime(till,"%m/%d/%Y %I:%M %p")-strptime(from,"%m/%d/%Y %I:%M %p")
| eval years = tonumber(strftime(duration, "%Y")) - 1970
| eval months = tonumber(strftime(duration, "%m")) - 1
| eval ymdt = years . " year(s) " . months . " month(s) " . strftime(duration, "%d") . " day(s) " . strftime(duration, "%T")

Of course, you can also change the reference calendar to a different year. But honestly I cannot see any useful application of this format. Here is an emulation:

| makeresults format=csv data="from, till
11/28/2023 03:38 PM, 11/28/2024 04:08 PM"
``` data emulation above ```

It will give the following result:

from: 11/28/2023 03:38 PM
till: 11/28/2024 04:08 PM
duration: 31624200.000000
years: 1
months: 0
ymdt: 1 year(s) 0 month(s) 01 day(s) 16:30:00
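The epoch-reference trick above can be sanity-checked outside Splunk. Here is the same idea in Python, where time.gmtime plays the role of strftime applied to the raw duration, with the same 1970 reference. One caveat: gmtime is UTC, while Splunk's strftime renders in the search head's local timezone, which is why a UTC-8 deployment can show "01 day(s) 16:30:00" where UTC shows day 2, 00:30:

```python
import time

def ymd_duration(seconds):
    """Split a duration into calendar years/months relative to the 1970 epoch."""
    t = time.gmtime(seconds)
    years = t.tm_year - 1970   # like strftime(duration, "%Y") - 1970
    months = t.tm_mon - 1      # like strftime(duration, "%m") - 1
    return years, months, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec

# 366 days + 30 minutes, matching the 11/28/2023 -> 11/28/2024 example
print(ymd_duration(31624200))  # -> (1, 0, 2, 0, 30, 0)
```

The day/hour/minute fields inherit the same reference-calendar quirks discussed above (e.g. tm_mday starts at 1), which is exactly why this format is of limited practical use.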
You would not be the first person to conflate the inputlookup and lookup commands. This is a classic use case for lookup. Insert the lookup command late in the query to pull the reason from the CSV.

index=vulnerability severity=critical
| eval first_found=replace(first_found, "T\S+", "")
| eval first_found_epoch=strptime(first_found, "%Y-%m-%d")
| eval last_found=replace(last_found, "T\S+", "")
| eval last_found_epoch=strptime(last_found, "%Y-%m-%d")
| eval last_found_65_days=relative_time(last_found_epoch,"-65d@d")
| fieldformat last_found_65_days_convert=strftime(last_found_65_days, "%Y-%m-%d")
| where first_found_epoch>last_found_65_days
| sort -first_found
| dedup cve
| lookup mylookup.csv ScanHost as asset_fqdn target-CVE as cve OUTPUT Reason
| rename severity AS Severity, first_found AS "First Found", last_found AS "Last Found", asset_fqdn AS Host, ipv4 AS IP, cve AS CVE, output AS Description
| streamstats count as "Row #"
| table Severity,"First Found","Last Found",Host,IP,CVE,Description,Reason

Pro tip: do everything you can to avoid using hyphens in field names. Splunk sometimes interprets a hyphen as a minus operator, which can break a query.
Hi, this depends on your use case and where and how you have installed Splunk. On premises, block storage is in most cases the best (or only) suitable solution; you rarely have a storage network and storage fast enough, in both latency and bandwidth, for object storage. But in a cloud such as AWS you can use object storage (S3) if you build your environment with a large enough local cache (NVMe). Basically, most of your requests must hit the cache; if too many requests go to S3, you will suffer bad response times and a poor user experience. r. Ismo
Hi @Rinku.singh, Thanks for letting me know. Let's see if the Community can jump in and help. If you happen to find the answer yourself, please do share it here as a reply.
Hi @Shriraam.M, Do you still need help, or was @Terence.Chen's insight helpful?
Hi, it's like @richgalloway said, but I think you could try to write your own "backend" and use the scripted authentication method in Splunk. I'm quite sure, though, that this is not worth the work needed to create and maintain that backend. r. Ismo
The field name is log.stdout. | spath input=log.stdout See my earlier comment https://community.splunk.com/t5/Splunk-Search/How-to-use-spath-with-string-formatted-events/m-p/670776/highlight/true#M229914 
Hi @QuantumRgw, this looks like a different issue: it seems there's a delay in search execution, which could be caused by insufficient hardware resources or too many scheduled searches. Wait a while and see if the issue persists, then analyze the load on your system using the Monitoring Console. Ciao. Giuseppe
How do I grab all of the versions of Splunk EXCEPT the top 1, basically the opposite of:

index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*"
| top limit=1 Version
| table Version

It would be nice if there were a top limit=-1 option. Or, how do I negate a subsearch?

index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*" [search index=winconfig sourcetype="WMIC:InstalledProduct" Name="*UniversalForwarder*" | top limit=1 Version | table Version]
| dedup host, Version
| table host Name Version

I want to search for all computers with other versions of Splunk.
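For clarity, the "everything except the most common value" logic being asked for, sketched in Python with made-up version strings:

```python
from collections import Counter

versions = ["9.1.2", "9.1.2", "9.1.2", "8.2.0", "7.3.4", "8.2.0"]

# Find the single most frequent value (what `top limit=1 Version` returns) ...
top_version, _ = Counter(versions).most_common(1)[0]

# ... then keep everything that is NOT that value.
others = [v for v in versions if v != top_version]

print(top_version)          # 9.1.2
print(sorted(set(others)))  # ['7.3.4', '8.2.0']
```

In SPL, the usual way to get this negation is to prefix the subsearch with NOT, e.g. `... NOT [search ... | top limit=1 Version | fields Version]`, since the subsearch expands to a `Version="<top value>"` filter.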
Hi @Mr_Adate, it isn't a good idea to add a new post, even on the same topic, to an existing question (especially a closed one!), because it's more difficult to get an answer, so next time please open a new question. Anyway, what do you mean by "compare": do you want to filter the rows of one lookup by the ones of the second, or do you want to know whether a value is present in both lookups or only in one of them?

If the first, please try something like this:

| inputlookup Firewall_NEW_Database.csv WHERE [ | inputlookup Firewall_OLD_Database.csv | rename Firewall AS Hostname | fields Hostname ]
| table Hostname Location Datacenter

If the second, please try:

| inputlookup Firewall_NEW_Database.csv
| eval lookup="Firewall_NEW_Database.csv"
| append [ | inputlookup Firewall_OLD_Database.csv | rename Firewall AS Hostname | eval lookup="Firewall_OLD_Database.csv" | fields Hostname Location Datacenter lookup ]
| stats values(Location) AS Location values(Datacenter) AS Datacenter dc(lookup) AS lookup_count values(lookup) AS lookup BY Hostname
| eval lookup=if(lookup_count=2, "Both lookups", lookup)
| table Hostname Location Datacenter lookup

Ciao. Giuseppe