All Topics
I am new to Splunk and I cannot figure out how to check the values and evaluate True/False. Below is the query that I tried:

index=windows host=testhost1 EventID IN ("4688","4103","4104","4768","4769")
| eval ev_field = if('EventID' IN ("4688","4103","4104","4768","4769"), "True", "False")
| dedup host, EventID, ev_field
| table host, EventID, ev_field

The requirement is to check whether, e.g., "testhost1" has particular values in the EventID field and, if not, to mark them as "False" or something like that. The query I made evaluates and adds "True" to ev_field for the EventIDs that it finds, but I can't figure out how to add "False" for the EventIDs that it does not find, because EventIDs that are not present in the logs simply won't show up in the results. (The first screenshot shows the result I get; the second is edited just as an example of what I actually need.)
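One possible approach (an untested sketch): seed the full list of expected EventIDs with a makeresults subsearch so that IDs absent from the logs still appear with a zero count, then flag them. The index, host, and ID list are taken from the question.

```
index=windows host=testhost1 EventID IN ("4688","4103","4104","4768","4769")
| stats count by host, EventID
| append
    [| makeresults
     | eval host="testhost1", EventID=split("4688,4103,4104,4768,4769", ",")
     | mvexpand EventID
     | eval count=0
     | fields host, EventID, count]
| stats max(count) as count by host, EventID
| eval ev_field=if(count > 0, "True", "False")
| table host, EventID, ev_field
```

The appended rows only "win" in the final stats when no real events were counted, which is what marks the missing EventIDs "False".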
Hi. I ran into a major problem to which I am unable to apply a real fix. I have tried all versions of the Forwarder (Linux x64) from 7.x.x to 8.x.x, and the problem is always the same. I have a directory with millions of small files; about 150,000 are generated per day, continuously. To monitor this path, the forwarder grows to several GB of RAM (about 20% of system RAM) instead of the usual few MB. I tested ignoreOlderThan = 1h (and also 5m) in inputs.conf, but the result does not change. Is there a way to avoid this excessive memory consumption? Thank you.
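The forwarder keeps tailing state for every file it has seen in a monitored path, which is usually what drives memory up when a directory holds millions of files; ignoreOlderThan skips reading old files but does not shrink the tracked set much. A hedged mitigation sketch, assuming the file layout allows it (the path and pattern below are hypothetical), is to narrow what the monitor stanza tracks and move processed files out of the directory:

```
[monitor:///data/myapp/logs]
ignoreOlderThan = 1h
# Only track files matching this (hypothetical) name pattern
whitelist = \.log$
```

The more reliable fix in practice is an external rotation job that archives files to a non-monitored directory once they are indexed, so the monitored path stays small.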
I'm using Splunk Enterprise 8.2.5 on Windows (both indexers and forwarders). I have modified inputs.conf on the indexer as follows to reference my PKI-signed certificate/key pair:

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\mycert\my.pem
sslPassword = mypassword
requireClientCert = false
sslVersions = *,-ssl2,-ssl3,-tls1.0,-tls1.1

After a service restart I see port 9998 listening on the indexer. I added the following config to the forwarder's outputs.conf:

[tcpout:production]
server = myindexerfqdn:9998
useSSL = true

No data is getting forwarded, though, and the following is raised in splunkd.log on the forwarder:

03-29-2022 13:01:11.229 +0100 ERROR SSLCommon [37916 parsing] - Can't read certificate file errno=33558528 error:02001000:system library:fopen:system library
03-29-2022 13:01:11.229 +0100 ERROR TcpOutputProc [37916 parsing] - Error initializing SSL context - check splunkd.log regarding configuration error for server myindexerfqdn:9998

What is the Windows forwarder looking for? I set the indexer not to verify client certs, but does the forwarder still need a client certificate (self-signed or otherwise) in order to use SSL?
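The error suggests the forwarder's SSL context is trying to open a certificate file it cannot read, typically a CA bundle for verifying the indexer's certificate rather than a client certificate. A hedged sketch of the forwarder side (the CA path is hypothetical, and depending on version sslRootCAPath may belong in server.conf under [sslConfig] instead of outputs.conf):

```
[tcpout:production]
server = myindexerfqdn:9998
useSSL = true
# Point at the CA that signed the indexer's cert, if you want verification...
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\myca.pem
# ...or explicitly skip server-cert verification while testing
sslVerifyServerCert = false
```

With requireClientCert = false on the indexer, a client certificate should not be necessary; the sketch above only addresses the server-certificate side.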
Hi All, Can someone please explain the below Syslog architecture to me?
I have a table in a dashboard generated from stats, and a time picker; the table values change based on the value selected in the time picker. The column values are not links. The first column holds error-message values, which are unique. If I click a value in the first column, the events having that message (field.msg) should be displayed in another window, and this should apply to every value in that first column. Thanks for your time and your willingness to help.
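One way to do this, assuming a Simple XML dashboard, is a drilldown on the table that opens a search in a new window using the clicked value. The index name and the time-picker token name (tok_time) below are assumptions; $click.value$ is the value of the first column in the clicked row.

```
<table>
  <search>
    <query>... your stats search ...</query>
  </search>
  <drilldown>
    <link target="_blank">search?q=index%3Dmy_index%20%22field.msg%22%3D%22$click.value$%22&amp;earliest=$tok_time.earliest$&amp;latest=$tok_time.latest$</link>
  </drilldown>
</table>
```

Passing the time-picker tokens into the link keeps the drilldown search on the same time range as the dashboard.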
Hi All, any idea how to generate an alert when a password does not contain any special characters? That is, whenever a password in my data is plain alphanumeric text only, I should generate an alert. I'm a newbie; please help out with the query. Thanks in advance.
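A hedged sketch, assuming a search-time field named password (the index and other fields are placeholders): match() returns true only when the regex finds at least one non-alphanumeric character, so negating it keeps passwords made of letters and digits only.

```
index=my_index password=*
| where NOT match(password, "[^a-zA-Z0-9]")
| table _time, user, password
```

Saved as an alert with a trigger condition of "number of results > 0", this would fire whenever such a password appears.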
Hi, I am new to Splunk and am creating a dashboard. I have interesting fields like field1.field2.x.stacktrace{}, field1.field2.x.x.stacktrace{}, field1.field2.x.x.x.stacktrace{}, fieldN.msg, and field.time. I am grouping on fieldN.msg and displaying latest(field.time) and count(fieldN.msg) for each group using stats:

stats count(fieldN.msg), latest(field.time) by fieldN.msg

Some events have values in field1.field2.x.stacktrace{}, field1.field2.x.x.stacktrace{}, or field1.field2.x.x.x.stacktrace{}; for some events those fields are not available at all, and for some events values may be present at more than one level. How can I get the latest stacktrace of each group as another field in the stats table when a stacktrace is available at any level, and display "NA" when it is not available in any event of the group?
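Since coalesce() returns the first non-null of its arguments and latest() ignores events where the field is null, one hedged sketch (field names taken from the question, index name assumed) is:

```
index=my_index
| eval stacktrace=coalesce('field1.field2.x.stacktrace{}', 'field1.field2.x.x.stacktrace{}', 'field1.field2.x.x.x.stacktrace{}')
| stats count, latest(field.time) as last_seen, latest(stacktrace) as last_stacktrace by fieldN.msg
| eval last_stacktrace=coalesce(last_stacktrace, "NA")
```

The single quotes around the field names are needed because of the dots and braces; the final coalesce fills "NA" for groups where no event had a stacktrace at any level.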
Hi Team, I have two reports. Report1 has a timestamp field; report2 doesn't, and carries only a date in the source filename. Report2 is not sent to Splunk in real time. Now I would like to combine those two reports and populate the data. I have extracted the date from source (at search time), and report1 and report2 share one common field. The files are already ingested; how do I compare the dates and populate the data? Can anyone suggest whether a "where" clause can be used in this case?
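Without the actual field names this can only be a sketch; assuming both reports share common_field and each row ends up with a report_date extracted at search time, a stats-based join lines the sources up, and where then keeps only the rows where both reports are present and their dates agree:

```
(source="*report1*") OR (source="*report2*")
| stats values(report_date) as report_date, dc(source) as source_count, values(*) as * by common_field
| where source_count = 2 AND mvcount(report_date) = 1
```

This pattern (stats by the join key, then where) generally scales better than the join command for combining two sources.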
Hello Splunk community! My indexers run as virtual machines in VMware, and I would like to increase the size of the drive where my logs are indexed. I am using LVM and would like to know whether there are best practices to follow before extending the drive size on the VMware side. Do I need to stop Splunk on the indexers before increasing the drive size? What are the potential risks during this operation? Thanks a lot for your help!
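Growing a VMDK and extending an LVM volume can normally be done online, so stopping Splunk is not strictly required, though stopping it during the resize is the conservative choice. A rough command sketch (the device, volume group, and logical volume names are hypothetical; verify each against your own layout, and snapshot or back up first):

```
# After growing the virtual disk in vSphere, make the guest see the new size
echo 1 > /sys/class/block/sdb/device/rescan
# Grow the physical volume to fill the disk
pvresize /dev/sdb
# Grow the logical volume and its filesystem together (-r resizes the fs too)
lvextend -r -l +100%FREE /dev/vg_data/lv_splunk_indexes
```

The main risks are resizing the wrong device and filesystem-resize failures, which is why having a VM snapshot or backup before the operation matters more than whether Splunk is running.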
Good morning fellow Splunkthiasts! I am trying to build a dashboard using Splunk REST, but unfortunately I cannot get data from certain endpoints with the | rest SPL command, while the curl approach returns what is expected. To be specific, I want to read the /services/search/jobs/<SID>/summary endpoint. The following SPL returns 0 results:

| rest /services/search/jobs/1648543133.8/summary

When called externally, the endpoint works as expected:

[2022-03-29 10:46:25] root@splunk1.lab2.local:~# curl -k -u admin:pass https://localhost:8089/services/search/jobs/1648543133.8/summary --get | head
<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder>
<field>_bkt</field>
<field>_cd</field>
<field>_eventtype_color</field>
<field>_indextime</field>
<field>_kv</field>
<field>_raw</field>

The same happens with /services/search/jobs/<SID>/results and /services/search/jobs/<SID>/events. When I call /services/search/jobs/ or /services/search/jobs/<SID>, data is returned by both SPL and curl. I tried this on several Splunk instances with versions ranging from 8.2.3 back to 7.3.3, always using an account with the admin role; the behavior is always exactly the same. Any hints on what I might be missing?
I am getting an "unable to verify SSL certificate" error while the AWS / GCP add-on tries to fetch data from the AWS / GCP cloud, respectively. I need to configure a proxy to access the internet from the Splunk Enterprise servers. Please let me know how to get the proxy certificate incorporated into the Splunk servers. PS: when I connect directly to the internet without the proxy, I do not face any issues.
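For a TLS-intercepting proxy, one common approach is to make the proxy's CA certificate available to the Python runtime the add-ons use. A hedged sketch only; the certifi path varies by Splunk version and is overwritten on upgrade, and some add-ons instead expose their own CA-bundle setting in the proxy configuration:

```
# Append the proxy's CA cert to the bundle the add-on's Python HTTP client trusts
cat proxy-ca.pem >> $SPLUNK_HOME/lib/python3.7/site-packages/certifi/cacert.pem
```

Check the add-on's proxy settings first; if it offers a CA-bundle or verify option, configuring that is more upgrade-safe than editing the bundled certifi file.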
Hi, we have a microservices-based system with several services running, and the developers put the complete search string into a lookup table. I am able to retrieve the string from the lookup but not able to execute it:

| inputlookup searchstring.csv
| streamstats count as Rowcount
| where Rowcount = 1
| search Search_String

A sample of what is in Search_String (this one is simple, but sometimes there are complex queries):

index=abc* AND source=xyz* AND host=* AND ERROR=50* | stats count as 5xx_Errors

How can I make the search string from the lookup execute?
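SPL cannot substitute a field's value into the query text directly, but the map command can run a templated subsearch per input row. A hedged sketch (map runs one search per row and is capped by its maxsearches option, so keep the row count small):

```
| inputlookup searchstring.csv
| head 1
| map search="search $Search_String$"
```

Each row's Search_String is interpolated into the template and executed as its own search; the results of all the mapped searches are returned together.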
Here's my JSON example file, log.json:

{"ts":"2022-01-01 01:22:34","message":"test4"}
{"ts":"2022-01-01 01:22:35","message":"test5"}
{"ts":"2022-01-01 01:22:36","message":"test6"}

And here's a props.conf that at least parses the JSON:

[json_test]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false

But when I try to get "ts" parsed as the timestamp, it fails completely:

[json_test]
CHARSET = UTF-8
DATETIME_CONFIG = None
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIMESTAMP_FIELDS = ts
TZ = America/Los_Angeles

I've also tried adding TIME_PREFIX = "ts":", but again, it does nothing (and it turns blue in the Add Data dialog, which I think means there's something wrong). Any idea what I'm missing?
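Two things stand out. DATETIME_CONFIG = None tells Splunk to skip timestamp extraction entirely, which defeats TIMESTAMP_FIELDS; and with INDEXED_EXTRACTIONS the props must live on the instance that first reads the file (e.g. a universal forwarder), not only on the indexer. A hedged corrected stanza, dropping DATETIME_CONFIG so TIMESTAMP_FIELDS can take effect:

```
[json_test]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = ts
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = America/Los_Angeles
```

TIME_PREFIX shouldn't be needed here: with structured (INDEXED_EXTRACTIONS) parsing, TIMESTAMP_FIELDS names the field directly rather than locating the timestamp by regex.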
Hi, how do I craft a search to match two fields from my raw events against two fields from a CSV file, and output results when one of the fields is different? The requirement is to match country_name and email from the raw events against what is in the CSV file: if the country_name in the raw events is different, i.e. it does not match the Country field in the lookup (based on the user's email), then display only those results.

Lookup file structure: Email_id, Country

The raw events have fields called email and country_name. Below is what I am trying, but it's not working:

index=xxx
| search [ inputlookup file.csv ] where (country_name != Country) AND (Email != Email_id)
| table displayName, Email_id, country_name
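A subsearch isn't needed here: the lookup command can pull Country into each event keyed by email, and a where clause can then compare the two country fields. A hedged sketch (assumes file.csv is configured as a lookup table file and that email values match Email_id exactly):

```
index=xxx
| lookup file.csv Email_id AS email OUTPUT Country
| where country_name != Country
| table displayName, email, country_name, Country
```

Events whose email has no row in the CSV get a null Country and drop out of the where clause; add fillnull before the where if you also want those surfaced.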
Each event has been ingested twice with the same uuid, and I want to keep only one event per uuid. How do I delete just one event for each uuid?

Searching with index="okta*" | dedup uuid shows only events with unique uuids, i.e. half of the total events, which is what I want to keep. But when I run index="okta*" | dedup uuid | delete, the operation is not allowed: it shows "this command cannot be invoked after the command simpleresultcombiner". Does anyone have a suggestion?
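delete must follow a plain streaming search, which is why it refuses to run after dedup. One hedged workaround is to write the deduplicated events into a fresh index with collect, then clear the old one (the target index name is hypothetical, and collect normally re-indexes data, so check the licensing implications of your collect settings first):

```
index="okta*"
| dedup uuid
| collect index=okta_clean
```

Once the clean copy is verified, a user with the can_delete role can run index="okta*" | delete (or the old index can be retired) so only the deduplicated copy remains searchable.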
I'm having some trouble parsing data prepended to JSON logs. I can do it at search time, but I'd like the parsing to happen within Splunk at ingest so I can search the parsed data. Can you point me in the right direction, and can I do this via the UI or do I need to edit props.conf manually?

This works via search:

sourcetype="Untangle" | rex "(?<json>\{.+)" | spath input=json

What I've tried in props.conf:

[untangle]
EXTRACT-untangle = (?<json>\{.+)

Example log:

Mar 29 01:45:04 _gateway Mar 28 20:45:04 INFO uvm[0]: {"timeStamp":"2022-03-28 20:45:04.762","s2pBytes":160,"p2sBytes":65,"sessionId":107845676257000,"endTime":0,"class":"class com.untangle.uvm.app.SessionStatsEvent","c2pBytes":65,"p2cBytes":160}
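The EXTRACT- setting creates the json field at search time but does not expand the JSON inside it. One hedged alternative is to strip the syslog prefix at index time with SEDCMD so each event begins at the {, and then let search-time KV_MODE=json auto-extract the fields (this only affects newly indexed data, and assumes the sourcetype is untangle):

```
[untangle]
SEDCMD-strip_prefix = s/^[^{]+//
KV_MODE = json
```

The SEDCMD part must be deployed where parsing happens (indexer or heavy forwarder) and does alter the stored _raw, so test it on a scratch sourcetype first if the prefix timestamp matters to you.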
Hello, I need to create date/time tokens for a dbxquery lookup. The default value for the start needs to be Thursday of the previous week, and the end needs to be Monday of the current week. Here is the dbxquery command:

| dbxquery connection="SPLUNK" maxrows=0 query="select * from VW_SPLUNK where to_date(TO_CHAR(create_date_time, 'yyyy-mm-dd hh24:mi:ss'), 'yyyy-mm-dd hh24:mi:ss') between to_date('$tokFromDate1$ $tok_startTime$','yyyy-mm-dd hh24:mi:ss') AND to_date('$tokToDate1$ $tok_EndTime$','yyyy-mm-dd hh24:mi:ss')"

Here is the XML I have for the tokens:

<input type="radio" token="tok_toggleTime">
  <label>Select date: Thu-Mon vs Mon-Thu</label>
  <choice value="ThM">Last Thursday to Monday</choice>
  <choice value="MTh">Last Monday to Thursday</choice>
  <change>
    <condition value="ThM">
      <set token="tok_startTime1">06:45:00</set>
      <set token="tok_endTime1">09:30:00</set>
      <set token="tok_textStartTime">1</set>
      <set token="tokFromDate1">@w0-3d</set>
      <set token="tokToDate1">@w0+1d</set>
    </condition>
    <condition value="MTh">
      <set token="tok_startTime1">06:45:00</set>
      <set token="tok_endTime1">09:30:00</set>
      <set token="tok_textEndTime">1</set>
      <set token="tokFromDate1">@w0+1d</set>
      <set token="tokToDate1">@w0-3d</set>
    </condition>
  </change>
</input>
<input type="text" token="tok_startTime" searchWhenChanged="true">
  <label>Start Time - Can enter new time</label>
  <default>$tok_startTime1$</default>
  <suffix/>
</input>
<input type="text" token="tok_endTime" searchWhenChanged="true">
  <label>End Time - Can enter new time</label>
  <default>$tok_endTime1$</default>
  <suffix/>
</input>

Thanks and God bless, Genesius
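One likely snag: relative-time strings like @w0-3d are passed into the dbxquery SQL as literal text, not rendered as dates. Simple XML's <eval> element can instead compute a formatted date string when the radio choice changes. A hedged sketch for the first condition only (verify the relative_time snap modifiers against your week boundaries before relying on them):

```
<condition value="ThM">
  <set token="tok_startTime1">06:45:00</set>
  <set token="tok_endTime1">09:30:00</set>
  <!-- Thursday of the previous week, as a literal yyyy-mm-dd string -->
  <eval token="tokFromDate1">strftime(relative_time(now(), "-7d@w4"), "%Y-%m-%d")</eval>
  <!-- Monday of the current week -->
  <eval token="tokToDate1">strftime(relative_time(now(), "@w1"), "%Y-%m-%d")</eval>
</condition>
```

With the tokens holding real yyyy-mm-dd strings, the to_date() calls in the dbxquery SQL receive values Oracle can actually parse.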
Hi, let's say I have a company directory lookup (e.g. Company_Directory) and I want to look up the entire hierarchy of supervisors for a specific employee. For instance: Alice reports to Bob, so take Bob as the new lookup criterion; Bob reports to Cathy, and so on, appending it all into a chain of command: Alice, Bob, Cathy, Donna, Eric, Fred, etc. Does Splunk have a command or capability to feed results back into a lookup as a loop? Thank you.
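SPL has no recursive lookup, so the usual workaround is to chain the lookup a fixed number of levels, one per expected depth of the hierarchy. A hedged sketch (the employee and supervisor field names are assumptions about Company_Directory; extend the pattern to the maximum depth you expect):

```
| makeresults
| eval employee="Alice"
| lookup Company_Directory employee OUTPUT supervisor AS mgr1
| lookup Company_Directory employee AS mgr1 OUTPUT supervisor AS mgr2
| lookup Company_Directory employee AS mgr2 OUTPUT supervisor AS mgr3
| eval chain_of_command=mvappend(employee, mgr1, mgr2, mgr3)
```

Lookups past the top of the chain simply return null and mvappend skips them, so over-provisioning the depth is harmless; for genuinely unbounded hierarchies an external scripted lookup is the usual escape hatch.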
I'm trying to create a multi-series line chart in a Splunk dashboard that shows the availability percentages of a given service across multiple individual days for a set of hosts. In other words, date is my x-axis, availability is my y-axis, and my legend contains the various hosts. (A picture is worth a thousand words, but the example screenshot is not included here.)

And here is my attempt at the query, which is currently not working. It uses the rhnsd service as an example and assumes ps.sh runs every 1800 seconds, so it calculates availability based on ps.sh running 86400/1800 = 48 times per day:

index=os host="my-db-*" sourcetype=ps rhnsd
| timechart span=1d count by host
| eval availability=if(count>=86400/1800,100,count/floor(86400/1800)*100)
| rename _time as Date host as Host availability as Availability
| fieldformat Date = strftime(Date, "%m/%d/%Y")
| chart first(Availability) over Host by Date

Any assistance with getting this query working so I can visualize it as a multi-series line chart in a dashboard panel would be greatly appreciated.
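One issue with the original query: after timechart ... by host there is no single count field (each host becomes its own column), so the eval never fires. A hedged rework that keeps the per-host counts long, computes the percentage, and then pivots hosts into chart series with xyseries:

```
index=os host="my-db-*" sourcetype=ps rhnsd
| bin _time span=1d
| stats count by _time, host
| eval Availability=round(min(count / 48, 1) * 100, 1)
| eval Date=strftime(_time, "%Y-%m-%d")
| fields Date, host, Availability
| xyseries Date host Availability
```

Rendered as a line chart, Date becomes the x-axis and each host a series; %Y-%m-%d is used so the string dates also sort chronologically.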
When I navigate to https://<splunk-server>:8089/ServiceNS I run into an error. When I go to other pages, "/services" and so on, they work fine. Could I get some guidance on what to do in order to bring this up?
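The endpoint is spelled servicesNS (lowercase s, capital NS), and unlike /services it is namespace-scoped: it expects a user and an app in the path, as in /servicesNS/{user}/{app}/... A quick check with curl (credentials are placeholders; "search" here is the app context):

```
curl -k -u admin:changeme "https://<splunk-server>:8089/servicesNS/admin/search/saved/searches"
```

So /ServiceNS fails for two reasons at once: the casing/spelling is off, and the user/app namespace segments are missing.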