All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a CSV file with a field named Account, containing over 1,000 values. In my logs the same field is named yourAccount. How do I find all the account logs matching the entries in that CSV file? Also, can I rex the field and build a table for it in the same query?
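A hedged sketch of one common approach (the index name and rex pattern below are placeholders, not from the question): extract the field with rex, then rename the lookup field to match and use it as a subsearch filter, all in one query:

```spl
index=your_index
| rex "yourAccount=(?<yourAccount>\S+)"
| search
    [ | inputlookup accounts.csv
      | rename Account AS yourAccount
      | fields yourAccount ]
| table _time yourAccount _raw
```

The subsearch expands into `yourAccount=value1 OR yourAccount=value2 OR ...`; with 1,000+ rows this stays well under the default subsearch result limit.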
Hi, I have the source below; some of the path segments (shown in red in the original post) keep changing:

source="/Application/logs/b80be40606aa7860f7de0c7ffa6b9d740581ec6035bc450ff5dfa3/apply-service/example.google.local:9818/Application-services/applyy-service:build-000/apply-service-464-xmp/system-out-dev.stdout"

In props.conf I tried the definitions below, but neither picks the source up. How can I use wildcards to match the correct source?

1) [source::/Application*]
TRANSFORMS-anonymize = address-anonymizer

2) [source::/Application/logs/*/apply-service/*/Application-services/*/*/system-out-dev.stdout]
TRANSFORMS-anonymize = address-anonymizer
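One detail worth noting, hedged as a sketch: in props.conf `[source::...]` stanzas, `*` matches anything except the path separator, while `...` matches any number of characters including separators. A single `...` can therefore absorb all of the changing segments:

```
[source::/Application/logs/.../system-out-dev.stdout]
TRANSFORMS-anonymize = address-anonymizer
```

This assumes the transform name from the question; the filename suffix alone anchors the match, so the hash, host:port, and build segments no longer matter.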
Hi All, I'm new to Splunk and not very familiar with search queries and lookup files. I have a custom IOC file with IPs and URLs, and I want to check whether there was any traffic to those destinations. I went through a few blogs, and the suggestion was to create a CSV lookup file. Could you please let me know if that is the correct approach, or is there a better way to search for the IOCs?
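A hedged sketch of the CSV-lookup approach (the index, field, and file names below are all assumptions): upload the IOCs as a lookup, then use it as a subsearch filter against the traffic logs:

```spl
index=network_traffic
    [ | inputlookup ioc_list.csv
      | rename ioc AS dest
      | fields dest ]
| stats count by src dest
```

A CSV lookup is a reasonable first step; for large or frequently refreshed IOC sets, a KV store lookup, or the threat intelligence framework in Splunk Enterprise Security if it is available, tends to scale better.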
I defined two event types: "loginAttempt" and "loginSuccess". Now I am trying to create a chart where the counts of both of these events are displayed side by side, per hour, to create a visual representation of the gap between attempted vs. successful logins for each hour. A tabular representation would look like: Date | Hour | Count of Attempts | Count of Successes. I got the individual counts working, but I'm having a hard time figuring out how to combine the two while adding them up per hour. Any help is greatly appreciated.
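A hedged sketch of one way to get the side-by-side hourly counts, assuming both event types are defined as in the question:

```spl
eventtype=loginAttempt OR eventtype=loginSuccess
| timechart span=1h count by eventtype
```

This yields one column per event type, per hour, which renders naturally as a grouped column chart; for the exact Date | Hour table layout, `| bin _time span=1h | chart count by _time eventtype` is an alternative.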
If you have upgraded, or are planning to upgrade, Splunk Enterprise to 8.2.2 and are planning to upgrade your ES in the process as well, please share your experience, do's & don'ts, and any KB links & documentation. Thank you in advance.
Hi, how can I show the max duration per servername?

index="my-index"
| rex "duration\[(?<duration>\d+.\d+)"
| rex "id\[(?<id>\d+)"
| rex "servername\[(?<servername>\w+)"
| stats max(duration) as MAXduration by servername
| table _time MAXduration id _raw

This SPL does not show _time, id, or _raw in the table; it only shows MAXduration. I searched around and some people suggest using eventstats or streamstats, but now I have another problem: streamstats shows _time, id, and _raw correctly, but the same MAXduration for every row of a servername.

| streamstats max(duration) as MAXduration by servername

_time           MAXduration  id   _raw
00:12:00.000    1.2323       921  00:12:00.000 info duration[1.2323]id[921]servername[server1]
00:12:00.000    1.4434       956  00:12:00.000 info duration[1.4434]id[956]servername[server1]
00:12:00.000    1.9998       231  00:12:00.000 info duration[1.9998]id[231]servername[server2]
00:12:00.000    1.8873       543  00:12:00.000 info duration[0.8873]id[543]servername[server2]
...

The main goal is to show the maximum duration for each server. Expected output:

_time           MAXduration  id   _raw
00:12:00.000    1.2323       921  00:12:00.000 info duration[1.2323]id[921]servername[server1]
00:12:00.000    1.6454       920  00:12:00.000 info duration[1.6454]id[920]servername[server2]
00:12:00.000    1.2545       821  00:12:00.000 info duration[1.2545]id[821]servername[server3]
00:12:00.000    0.1123       321  00:12:00.000 info duration[0.1123]id[321]servername[server4]

Any idea? Thanks
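A hedged sketch: `stats max()` keeps only the by-fields and the aggregate, which is why _time, id, and _raw disappear. `eventstats` instead attaches the per-server maximum to every event, after which a `where` clause keeps just the row that achieved it (the escaped dot in the duration rex is a small fix over the original pattern):

```spl
index="my-index"
| rex "duration\[(?<duration>\d+\.\d+)"
| rex "id\[(?<id>\d+)"
| rex "servername\[(?<servername>\w+)"
| eventstats max(duration) AS MAXduration by servername
| where duration=MAXduration
| table _time MAXduration id servername _raw
```

One caveat: if two events on the same server tie for the maximum, both rows survive; a trailing `| dedup servername` would keep one per server.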
Greetings! Hello everyone, I have an issue after adding a trial license: I cannot search. When searching I get this error:

5 errors occurred while the search was executing, therefore search results might be incomplete.
* [splunkindexer1] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
* [splunkindexer2] through [splunkindexer5] report the same error.

After installing the trial license I restarted Splunk, and I also restarted the Splunk forwarder service, but I still get the above error when I search in the Search & Reporting app. I also get a new error showing that many DMC alerts are disabled; could this be the cause of the failed searches across all indexers? Kindly help me with this. Thank you in advance.
Hello All, I need to alert when the perc75(totalfilter) value stays greater than 40,000 for 10 minutes or more. I am sharing my original query below, and I am looking to append the above condition to it so the alert triggers:

index=clai_pd env=pd*cloud* perflog getprovider RASNewDispatch-Ext_RASDispatchDetailScreen-getProviderNext_act OR RASDispatchPage-RASDispatchPanelSet-RASDispatchCardPanel-getProvider_act
| timechart span=10m perc50(totalfilter), perc75(totalfilter) by count
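A hedged sketch, keeping the base search from the question but dropping the `by count` split (which would try to group by a field literally named count): bucket into 10-minute windows, keep only buckets over the threshold, and let the alert trigger when the result count is greater than zero:

```spl
index=clai_pd env=pd*cloud* perflog getprovider RASNewDispatch-Ext_RASDispatchDetailScreen-getProviderNext_act OR RASDispatchPage-RASDispatchPanelSet-RASDispatchCardPanel-getProvider_act
| timechart span=10m perc75(totalfilter) AS p75
| where p75 > 40000
```

Scheduled every 10 minutes over a recent window, each surviving row represents a 10-minute bucket above 40,000, which matches the "10 mins or more" condition for a single bucket.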
Hi, I use the query below to display a single-value panel with a count over the last 7 days:

index=mesures sourcetype=sign | fields sig_id | stats dc(sig_id)

Now I need to change it to a trend indicator, so the single panel shows the count but also the trend compared to the week before the last 7 days. You can see the code from my XML file below, but I get "0" displayed for my count and, obviously, "0" for my trend.

<panel>
<single>
<search>
<query>`index=mesures sourcetype=sign | fields sig_id | timechart dc(sig_id)</query>
<earliest>-7d@d</earliest>
<latest>now</latest>
</search>
<option name="colorBy">trend</option>
<option name="colorMode">none</option>
<option name="drilldown">none</option>
<option name="height">200</option>
<option name="numberPrecision">0</option>
<option name="rangeColors">["0x53a051","0xf8be34","0xf1813f","0xdc4e41"]</option>
<option name="rangeValues">[0,5,10]</option>
<option name="refresh.display">progressbar</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendInterval">-7d</option>
<option name="underLabel">Compared to last week</option>
<option name="useColors">1</option>
</single>
</panel>

So what is wrong, please?
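Two things stand out, hedged as guesses: the `<query>` begins with a stray backtick, which SPL treats as the start of a macro and which alone could produce empty results; and for the trend indicator to compare two 7-day windows, the search window has to cover 14 days with a span matching the trendInterval. A sketch of the query:

```spl
index=mesures sourcetype=sign
| timechart span=7d dc(sig_id)
```

Paired with `<earliest>-14d@d</earliest>`, this yields a current 7-day bucket plus the previous one, which is what `trendInterval` of -7d compares against.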
Good day. I have a CSV file like the one below. I want to ingest it via a monitor input, and we should not use INDEXED_EXTRACTIONS=CSV. The ingestion itself works fine, like normal monitoring.

Question: how do I extract the fields that are present in the header? I had this in my props.conf, but it is not working:

FIELD_DELIMITER = ,
FIELD_NAMES = "name","age","class"

Do I need anything else, like FIELD_QUOTE, to handle the double quotes, or some other header attribute? Kindly help.

"name","age","class"
"alice","26","grade3"
"bob","24","grade2"
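A hedged sketch: FIELD_DELIMITER and FIELD_NAMES belong to the structured-data (indexed extraction) pipeline, so without INDEXED_EXTRACTIONS they are likely ignored. A search-time delimiter extraction via REPORT/transforms is one alternative (the stanza and transform names below are placeholders):

props.conf:

```
[your_csv_sourcetype]
REPORT-csvfields = extract_csv_columns
```

transforms.conf:

```
[extract_csv_columns]
DELIMS = ","
FIELDS = "name","age","class"
```

One caveat: DELIMS splits on the comma but does not strip the surrounding double quotes from the values, so a search-time cleanup (for example an `eval` with `trim(field, "\"")`) may still be needed.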
Ehhh, I'm trying to set up polling for remote events using WMI (yes, I know it's easier to install a UF on the destination machine, but I can't do that in this case). I know the docs (https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/MonitorWMIdata) and I'm trying to do as they say. This is in a lab for now, so some permissions might be overly loose. I created a domain account, gave it full local admin rights on the Splunk UF machine, and installed the UF to run with this account. I installed the TA for Windows and pointed the UF to the indexer. First success: the _internal log is filling with logs from the forwarder, so the connectivity works. I added the proper Security Policies through GPO, added DCOM group membership, and added WMI namespace security permissions on \\root and \\root\cimv2 for my UF domain user. I added a wmi.conf stanza for remote WMI. And I'm getting:

09-20-2021 16:53:44.622 +0200 ERROR ExecProcessor [8152 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query failed (query="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Security"") (error="Current user does not have permission to perform the action." HRESULT=80041003) (ad.lab: Security)

If I do wbemtest, as described in https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/TroubleshootingWMI, I can connect and authenticate properly to the remote server. But if I run the test query I get empty results. Of course the logs on the queried server's side are very "helpful": they say there is an error and "cause=unknown".
Id = {00000000-0000-0000-0000-000000000000}; ClientMachine = SEP; User = LAB\splunkuf; ClientProcessId = 5860; Component = Unknown; Operation = Start IWbemServices::ExecNotificationQuery - root\cimv2 : SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Security"; ResultCode = 0x80041032; PossibleCause = Unknown

Debugging on the UF's side also isn't very helpful:

09-20-2021 17:57:28.845 +0200 DEBUG ExecProcessor [3796 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-20-2021 17:57:28.845 +0200 INFO ExecProcessor [3796 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 0 microseconds)
09-20-2021 17:57:28.939 +0200 DEBUG ExecProcessor [3796 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Application"" (ad.lab: Application)
09-20-2021 17:57:28.939 +0200 ERROR ExecProcessor [3796 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query failed (query="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Application"") (error="Current user does not have permission to perform the action." HRESULT=80041003) (ad.lab: Application)
09-20-2021 17:57:28.939 +0200 INFO ExecProcessor [3796 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Will retry connection to WMI provider after 5.000 seconds (ad.lab: Application)

As you can see, I can connect, but the WMI query fails.
And now I don't know whether it's a case of WMI-level permissions or some other permissions. I hope I don't have to add additional permissions to the event logs, because that's ridiculous and no sane administrator will let me enter SDDLs directly into the registry. If I run Event Viewer on the Splunk UF machine as the Splunk UF domain user, I can connect to the server I want to monitor, but I cannot open any logs. It says "Event Viewer cannot open the event log or custom view. Verify that Event Log service is running or query is too long. The operation completed successfully. (5)". Any hints on where to look for help?
Hi, I have a table built with a chart command:

| stats count by _time Id statut
| xyseries Id statut _time

Before using the xyseries command the _time values were readable, but after applying xyseries I get epoch time instead. Can you help me transform the epoch time into readable time?
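A hedged sketch: since xyseries uses _time as the cell value here, converting it to a formatted string before the xyseries keeps the table readable (the format string is an assumption):

```spl
| stats count by _time Id statut
| eval readable_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| xyseries Id statut readable_time
```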
Hello team! How are you? I have a question about how to search with comma-separated values. I have an index with VM information; the column "datastores" returns all the datastores assigned to each VM, and I need to calculate how much free space that VM has. I also have another index with the datastore information. I know how to filter on just one datastore, but I don't know how to filter on two or more comma-separated datastores. Can anyone help me? Thanks in advance.
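A hedged sketch of one way to do this (all index and field names below are placeholders, and it assumes datastores is a comma-separated string): split it into a multivalue field, expand to one row per datastore, enrich each row from the datastore index, then sum:

```spl
index=vm_inventory
| makemv delim="," datastores
| mvexpand datastores
| eval datastore=trim(datastores)
| join type=left datastore
    [ search index=datastore_info
      | fields datastore freespace ]
| stats sum(freespace) AS total_freespace by vm_name
```

`join` carries subsearch result limits; for large datastore inventories, a lookup populated from that index (or `stats` over both indexes with a common field) avoids them.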
Hello All, our environment consists of an indexer cluster scaled for 1 TB of data per day. On average, we have about 30 users logged in running ad-hoc searches and about 40 scheduled searches running alongside those queries. For 360 days of the year, the average CPU of our indexer cluster is no higher than 25%. But for one week of the year, around Thanksgiving, we have about 65 users logged in, running ad-hoc queries and loading multiple dashboards to monitor sales data. During this week, the CPU on our indexers stays consistently at 90%-100%, which we have attributed to many users loading dashboards with many panels simultaneously, on top of the normal ad-hoc and scheduled searching. My question is: what recommendations are out there for combating this increased usage and preventing the CPU from being pegged at 100% for one week of the year? We are thinking about limiting the number of searches each user is allowed to run concurrently, but fear that many users will complain that their searches are queued. Any suggestions are much appreciated. Respectfully,
Hi, I am stuck on the end of a search with the foreach command. Here is my command:

| stats count as count by _time Id statut
| xyseries Id statut count
| fillnull
| foreach count [ eval <<FIELD>>=case(isnum(<<FIELD>>) AND <<FIELD>>=0," ",isnum(<<FIELD>>) AND <<FIELD>>>=1," ",true(),<<FIELD>>)]

It gives me a table with 0 and 1 values, but it does not display the icon I put in the foreach command. Can you help me troubleshoot, please?
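Two hedged observations: after `xyseries Id statut count` the value columns are named after the statut values, so `foreach count` may not match any column (`foreach *` does); and both case() branches assign a blank string (" "), so even matching cells would render as empty. If the intent is a visible marker, emitting a character works without any special icon support (the ✔/✘ glyphs are placeholders):

```spl
| stats count by _time Id statut
| xyseries Id statut count
| fillnull
| foreach * [ eval <<FIELD>>=case(isnum('<<FIELD>>') AND '<<FIELD>>'==0,"✘", isnum('<<FIELD>>') AND '<<FIELD>>'>=1,"✔", true(),'<<FIELD>>') ]
```

The single quotes around '<<FIELD>>' keep the eval valid when a statut value contains spaces or other special characters.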
Hi, I have multiple IP addresses in a field, like this:

host    IP
h1      10.0.2.2; 10.0.2.1
h2      10.0.2.3; 10.0.1.1
h3      10.2.2.2

I want to expand the IP field like this:

h1 10.0.2.2
h1 10.0.2.1
h2 10.0.2.3
h2 10.0.1.1
h3 10.2.2.2

Is there a way to produce a result like this? Thanks
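A hedged sketch, assuming IP is a single string with "; " separators: makemv splits it into a multivalue field and mvexpand emits one row per value:

```spl
| makemv delim=";" IP
| mvexpand IP
| eval IP=trim(IP)
| table host IP
```

The trim() removes the leading space left over from the "; " separator.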
One of my ISSOs asked that I scan an implementation of the Splunk Web UI using Burp Suite Enterprise (similar to a Nessus web app scan or a WebInspect scan). What issues should I be on the lookout for when doing this? Mitigations? Any known settings or best practices? In terms of user agreements, are there any known clauses that might limit or prevent this? Thank you
Dear Splunk community, I have tableOne and tableTwo. TableOne has a click selection: based on what I click in tableOne, tableTwo is populated. Because of the click selection, the text in tableOne is blue (a hyperlink). I want to apply a color to the text in both tables. I used the following CSS (inline in the source of the dashboard, using style tags):

.shared-resultstable-resultstablerow {
color: #19d18b important!;
font-family: DejaVu Sans Mono, monospace
}

The font family is applied to both tables, but the color is only applied to tableTwo. This means that the styling from the click selection overrides my own CSS color. How do I apply the color to both tables? Thanks in advance.
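Two hedged observations: the priority flag is spelled `!important` (the `important!` form is invalid and silently ignored, so the color declaration loses to the link styling), and the blue drilldown text is rendered as an anchor, whose color typically comes from a dedicated `a` rule rather than from the row. A sketch (the extra `a` selector is an assumption about the drilldown markup):

```css
.shared-resultstable-resultstablerow,
.shared-resultstable-resultstablerow a {
    color: #19d18b !important;
    font-family: "DejaVu Sans Mono", monospace;
}
```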
Please suggest a Splunk query to find whether abc@def.com successfully sent emails to qbc@xyz.com, or whether any emails failed, between 5 and 6 PM.
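A heavily hedged sketch, since nothing about the mail source is given: assuming the mail logs are indexed with sender, recipient, and status fields (all three field names, plus the index, are placeholders whose real names depend on the MTA and sourcetype):

```spl
index=mail sender="abc@def.com" recipient="qbc@xyz.com" earliest=@d+17h latest=@d+18h
| stats count by status
```

The relative time modifiers cover today's 5-6 PM window; swap them for absolute times or the time picker as needed.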
Hello All, I have a quick question about comparing fields against a lookup table. Imagine I have a query like this:

index=linux [|inputlookup suspicious_commands.csv where command | fields command ]

Basically I have a lookup table that includes some Linux commands, and I want to compare it with the command field from the original log source. The question is that I want to run a "contains" match on the original command field using the lookup values. Let's say the lookup has a command like "rm -rf", but the log's command field contains "/usr/bin/rm -rf". Can I do this search based on "contains" instead of an exact match?
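A hedged sketch of one common trick: have the subsearch wrap each lookup value in wildcards, so the generated search terms do substring-style matching against the command field:

```spl
index=linux
    [ | inputlookup suspicious_commands.csv
      | eval command="*" . command . "*"
      | fields command ]
```

The subsearch expands to `command="*rm -rf*" OR ...`, which matches "/usr/bin/rm -rf". An alternative is a lookup definition with `match_type = WILDCARD(command)` in transforms.conf, with the wildcards stored in the CSV itself.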