All Topics

I have been trying to achieve "grouped email recipients" and while it is possible, it just won't behave the way I want with transforming commands. For raw events it works great to have a macro with an eval setting "recipients" to a list of email addresses and then using $result.recipients$ in "action.email.to =". However, for things like stats and table this does not work, as the actual values of recipients are not part of the results. For "table" it works if I include "recipients" in the table, but that looks horrible.

This can be sort of demonstrated like so, where this works:

index="_internal" | `recipients` | dedup log_level | table log_level | fields recipients

And this does not, as recipients is empty:

index="_internal" | eval recipients = "email1@email.com, email2@email.com" | dedup log_level | table log_level | fields recipients

So, someone suggested that one could use a savedsearches.conf.spec file to define a token like:

[savedsearches]
recipients = <string>

and then use "recipients" in the savedsearches.conf file as $recipients$. This does not seem to be the case though; I cannot find it documented anywhere, and the spec file seems to be more "instructive" than anything.

Another suggestion was to define a global token directly in the savedsearches file like:

[tokens]
recipients = Comma-separated list of email addresses

and then use $recipients$ for all "action.email.to = $recipients$" entries in that file. I cannot find this token definition solution documented anywhere either.

Are any of these suggestions at all valid? Is there any way, somewhere in the app where the alerts live, to define a "token" like "recipients" which can be referenced in all "action.email.to" instances in that file, so that I only have to update one list in one place? Or is this a "suggested improvement" I need to submit somewhere?

All the best
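A minimal sketch of the one workaround I know of, assuming $result.recipients$ simply reads the first result row: the field only has to survive into the final results, so the eval can move after the transforming command (at the cost of the extra column complained about above; the addresses are placeholders):

index="_internal"
| dedup log_level
| stats values(log_level) as log_level
| eval recipients="email1@email.com, email2@email.com"

with the alert action still using:

action.email.to = $result.recipients$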
Hello,

I have a problem with Linux UFs. They seem to be sending data in batches. The period between batches is about 9 minutes, which means the oldest messages in a batch arrive at the indexer with a 9-minute delay. It starts approximately 21 minutes after a restart; during those 21 minutes the delay is constant and low.

All Linux UFs behave in a similar way: it starts 21 minutes after the UF restart, but the period differs. The UF versions are 9.2.0.1 and 9.2.1.

I have checked:
- queue state in the internal logs; it looks OK
- UF thruput is set to 10240

I have independently tested that after restarting the UF the data comes in with a low and constant delay. After about 21 minutes it stops for about 9 minutes. After 9 minutes, a batch of messages arrives and is indexed, creating a sawtooth progression in the graph. It doesn't depend on the type of data; it behaves the same for internal UF logs and other logs.

I currently collect data using file monitor inputs and a journald input.

I can't figure out what the problem is.

Thanks in advance for any help,

Michal
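One thing worth ruling out, as a guess: a sawtooth like this is the classic shape of forwarder output throttling. A minimal limits.conf sketch on the UF to disable the cap for a test (which app/local directory it lives in is up to you):

[thruput]
# 0 means unlimited; the existing 10240 KBps value may still be the ceiling being hit
maxKBps = 0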
Hi All,

Please help me to solve the below queries in a Splunk Classic dashboard.

Query 1: For example, we have created a table for each alert in Splunk with all the alert details as individual columns, like alertid, alertname, alerttime, alertsummary, alertdescription, etc., in a Splunk Classic dashboard. How do I add an extra "comment" column to this table, manually enter a value in that column for each row, and save it to a lookup file?

Query 2: Is it possible to add an editable column to a Splunk table and save the response to a lookup? If yes, please help me implement the same in the dashboard.
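Simple XML tables aren't editable in place, so here is a sketch of the usual workaround, with the lookup name (alert_comments.csv) and its fields being my assumptions: keep comments in their own lookup keyed by alertid, join them into the table, and write new comments from dashboard inputs via outputlookup.

Table search:

index=... your alert search ...
| lookup alert_comments.csv alertid OUTPUT comment

Comment-writing search (fired with $id$ and $comment$ tokens from a text input and a submit button):

| makeresults
| eval alertid="$id$", comment="$comment$"
| fields alertid comment
| inputlookup append=true alert_comments.csv
| dedup alertid
| outputlookup alert_comments.csv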
I am trying to create a bar chart that shows the total daily Splunk ingestion (in TB) by day for the past month. I am using the search below, but I am not able to get timechart to work to display the total ingestion by day. What am I missing?

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | stats sum(b) as usage | eval usage=round(usage/1024/1024/1024) | eval usage = tostring(Used, "commas")
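A sketch of what I'd expect to work, assuming the same license_usage.log source: b is in bytes, so four divisions by 1024 reach TB, and timechart must do the time bucketing itself (a plain stats sum collapses _time away, which is why the original can't chart by day):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| timechart span=1d sum(b) as bytes
| eval TB=round(bytes/1024/1024/1024/1024, 2)
| fields _time TB

Two other issues in the original: tostring(Used, "commas") references a field named Used that was never created (the eval named it usage), and tostring turns the number into a string, which charts won't plot.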
Hi, we moved a customer from virtualized Splunk indexers to physical machines with NVMe storage. Since we performed this migration the customer experiences slower results when running dense searches. So I checked the job inspector and it seems there is an issue. As far as I understood, the value "dispatch.fetch" is the time the SH waits for the indexers to return the results. Is this value driven by network or storage conditions? Attached is the slightly blurred job inspector.
Hi, I have a field called "Employee_Email". This field contains the value: ["firstname.lastname@gmail.com"]

How do I remove the special characters [" and "]?

I tried:

| eval test1 = replace(Employee_Email, "[", "")

But whenever I try to remove either [ or " it gives me one of the following errors:

Error in 'EvalCommand': Regex: missing terminating ] for character class

Or: Unbalanced quotes.

Is there a way to escape the normal effect of [ and "?
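A sketch of what should work: replace() takes a regular expression, so [ and ] have to be backslash-escaped, and a literal double quote inside the eval string is written \":

| eval test1 = replace(Employee_Email, "[\[\]\"]", "")

That is a character class matching [, ], or ", each replaced with nothing.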
I created a Splunk dashboard that has a lot of filters (multiple dropdowns), text inputs with different tokens, and dynamic tables. I want every filter I choose to dynamically constrain the others, but for now it isn't dynamic across every existing output and filter. Here is my XML:

<form version="1.1" theme="dark">
  <label>Dashboard Overview</label>
  <fieldset submitButton="false">
    <input type="time" token="global_time" searchWhenChanged="true">
      <label>Select Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="owner" searchWhenChanged="true">
      <label>Select Owner</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>owner</fieldForLabel>
      <fieldForValue>owner</fieldForValue>
      <search>
        <query>index=db_warehouse | dedup owner | fields owner | table owner</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="dropdown" token="hostname" searchWhenChanged="true">
      <label>Select Hostname</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>hostname</fieldForLabel>
      <fieldForValue>hostname</fieldForValue>
      <search>
        <query>index=db_warehouse hostname=$hostname$ owner=$owner$ ipaddress=$ipaddress$ cve=$cve$ cve=$cve$ | dedup hostname | fields hostname | table hostname</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="ipaddress" searchWhenChanged="true">
      <label>Select by IP Address</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>ipaddress</fieldForLabel>
      <fieldForValue>dest</fieldForValue>
      <search>
        <query>index=db_warehouse | search hostname=$hostname$ owner=$owner$ ipaddress=$ipaddress$ cve=$cve$ | dedup dest | fields dest | table dest</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="text" token="cve">
      <label>Search CVE</label>
      <default>*</default>
    </input>
  </fieldset>
  <table>
    <title>Detail Information Table</title>
    <search>
      <query>index=db_warehouse | fields _time, hostname, dest, mac_address, vulnerability_title, os_version, os_description, severity, cvss_score, last_assessed_for_vulnerabilities, solution_types, cve, owner, dest_category | search hostname=$hostname$ owner=$owner$ ipaddress=$ipaddress$ cve=$cve$ | rename dest as ip, dest_category as category | table _time, hostname, ip, mac_address, vulnerability_title, owner, category, cve, os_version, os_description, severity, cvss_score, last_assessed_for_vulnerabilities, solution_types | dedup hostname</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>

Is there any reference or solution for this?
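One likely culprit, as an observation rather than a confirmed diagnosis: the hostname and ipaddress dropdowns reference their own tokens ($hostname$, $ipaddress$) inside their own populating searches, so each input waits on a token it is itself supposed to set, and the ipaddress query also filters on ipaddress= while the events apparently carry the field dest. A sketch of the ipaddress input with the self-reference removed (field names kept from the original):

<input type="dropdown" token="ipaddress" searchWhenChanged="true">
  <label>Select by IP Address</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>dest</fieldForLabel>
  <fieldForValue>dest</fieldForValue>
  <search>
    <query>index=db_warehouse hostname=$hostname$ owner=$owner$ cve=$cve$ | dedup dest | table dest</query>
    <earliest>$global_time.earliest$</earliest>
    <latest>$global_time.latest$</latest>
  </search>
</input>

The same applies to the hostname input: drop hostname=$hostname$ (and the duplicated cve=$cve$) from its own populating search.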
We have data similar to the below and are looking to create a stacked timechart; however, setting the stackMode does not seem to have any impact on the chart.

timestamp   System   Value
TIME1       SYS1     VALUE1.1
TIME1       SYS2     VALUE2.1
TIME1       SYS3     VALUE3.1
TIME1       SYS4     VALUE4.1
TIME2       SYS1     VALUE1.2
TIME2       SYS2     VALUE2.2
TIME2       SYS3     VALUE3.2
TIME2       SYS4     VALUE4.2

timechart latest(Value) by System

<option name="charting.chart.stackMode">stacked</option>
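In case it helps, the option pair I believe is needed: stackMode only applies to chart types that can stack (column, bar, area), so if the panel renders as a line chart the option is silently ignored:

<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>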
Hi, can anyone help me with a solution please? I have WinEventLog data as below. By default Splunk is splitting on the whitespace while parsing the field name. For example, it should extract the field name as "Provider Name", but instead it is extracting the field name as "Name". Similarly I have many fields like the ones highlighted below. Please guide me on where I have to make a change to get the correct field names.

Sample log:

<Event xmlns='http://XXX.YYYY.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{12345-1111-2222-a5ba-XXX}'/><EventID>2222</EventID><Version>0</Version><Level>0</Level><Task>12345</Task><Opcode>0</Opcode><Keywords>1110000000000000</Keywords><TimeCreated SystemTime='2024-07-24T11:36:15.892441300Z'/><EventRecordID>0123456789</EventRecordID><Correlation ActivityID='{11aa2222-abc2-0001-0002-XXXX1122}'/><Execution ProcessID='111' ThreadID='111'/><Channel>Security</Channel><Computer>YYY.xxx.com</Computer><Security/></System><EventData><Data Name='MemberName'>-</Data><Data Name='MemberSid'>CORP\gpininfra-svcaccounts</Data><Data Name='TargetUserName'>Administrators</Data><Data Name='TargetDomainName'>Builtin</Data><Data Name='TargetSid'>BUILTIN\Administrators</Data><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>xyz$</Data><Data Name='SubjectDomainName'>CORP</Data><Data Name='SubjectLogonId'>1A2B</Data><Data Name='PrivilegeList'>-</Data></EventData></Event>
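A search-time sketch for pulling the provider name explicitly, assuming the raw XML shown above (ProviderName is my choice of field name):

| rex field=_raw "<Provider Name='(?<ProviderName>[^']+)'"

For XML-rendered Windows events in general, the Splunk Add-on for Microsoft Windows handles these extractions at scale; the rex above is only a spot fix.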
Hello, I want to extract an IP field from a log but I get an error. This is a part of my log:

",\"SourceIp\":\"10.10.6.0\",\"N

I want 10.10.6.0 as a field. Can you help me?
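A sketch that sidesteps the escaped-quote headache entirely, assuming the backslashes shown really are literal in _raw (the \W+ soaks up the \":\" characters between the key and the value; src_ip is my field name):

| rex "SourceIp\W+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"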
Hello,

I am building an app using the Splunk Add-on Builder. Can I use the helper.new_event method to send a metric to a metrics index? If yes, what should the format of the "event" be?

Kind regards,
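Not an authoritative answer, but a sketch of the pattern I'd try, with loud caveats: helper.new_event only builds an event payload, so getting it into a metrics index would rely on index-time log-to-metrics conversion (METRIC-SCHEMA-TRANSFORMS in props.conf on a custom sourcetype; that sourcetype, my_metric_st, the index name, and the schema behind it are all assumptions you would have to configure yourself, not something the Add-on Builder generates):

# Sketch only: emits a JSON payload for a hypothetical log-to-metrics sourcetype.
import json

payload = json.dumps({"metric_name": "cpu.usage", "_value": 42.0, "region": "us"})
event = helper.new_event(
    data=payload,
    source="my_input",             # hypothetical
    index="my_metrics_index",      # must be an index of type metric
    sourcetype="my_metric_st")     # hypothetical; needs METRIC-SCHEMA-TRANSFORMS
ew.write_event(event)              # ew is the EventWriter passed to collect_events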
How can I display the summed data of 2 fields for the same date last month (example: 24 June and 24 May)? I have tried the below query and I am getting the data, but how can I present it in this manner?

index=gc source=apps
| eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
| eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
| eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
| eval current_pdate_4=strftime(relative_time(now(), "-30d@d"),"%Y%m%d")
| where DATE = current_pdate_4
| stats sum(AMT) as w4AMT, sum(GLBL1) as w4FEE_AMT by DATE id
| append
    [ search index=gc source=apps
    | eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
    | eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
    | eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
    | eval current_pdate_3=strftime(relative_time(now(), "-@d"),"%Y%m%d")
    | where DATE = current_pdate_3
    | stats sum(AMT) as w3AMT, sum(GLBL1) as w3FEE_AMT by DATE id ]
| table DATE, id, w3AMT, w4AMT, w4FEE_AMT, w3FEE_AMT
| rename DATE as currentDATE, w3AMT as currentdata, w3FEE_AMT as currentamt, w4AMT as lastmonthdate, w4FEE_AMT as lastmonthdateamt

Desired output:

DATE       id   currentdata   lastmonthdate   currentamt   lastmonthdateamt
20240723   2    2323          2123            23           24
20240723   3    2423          2123            23           24
20240723   4    2223          2123            23           24
20240723   5    2323          2123            23           24
20240723   6    2329          2123            23           24
20240723   7    2323          2123            23           24
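A sketch of one way to get both dates onto the same row without append (the case() buckets each event as current or lastmonth, and chart pivots them into side-by-side columns; the column names are mine):

index=gc source=apps
| eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
| eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
| eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
| eval period=case(DATE=strftime(relative_time(now(), "-@d"), "%Y%m%d"), "current", DATE=strftime(relative_time(now(), "-30d@d"), "%Y%m%d"), "lastmonth")
| where isnotnull(period)
| chart sum(AMT) as AMT, sum(GLBL1) as FEE_AMT over id by period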
Hi Team,

Could you please help me with the logic to download the CrowdStrike sandbox analysis report using Splunk SOAR? Thanks in advance.

Regards,
Harisha
Hello Splunkers,

I have a clustered Splunk 9.2.1 deployment on-prem. I have pushed an app from the CM to the search head cluster and am trying to configure a data input through a search head (the option is not available from the CM). Whenever I add a data input I always get this error:

"Current instance is running in SHC mode and is not able to add new inputs"

How can I fix this?
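For context, a sketch of the route I understand to be supported: SHC members can't take input changes through the UI, so the inputs.conf goes into the app on the deployer and gets pushed to the members (paths assume a Linux install and an app named my_app):

# On the deployer:
$SPLUNK_HOME/etc/shcluster/apps/my_app/local/inputs.conf

# Then push the bundle to the members:
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member>:8089 -auth admin:<password>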
Hi, guys. Forgive my English level first; it is not my native language.

I have a distributed search deployment which consists of an indexer instance and a search head instance. Their host specifications are as follows:

indexer
CPU: E5-2682 v4 @ 2.50GHz / 16 cores
Memory: 32G
Disk: 1.8TB (5000 IOPS)

search head
CPU: E5-2680 v3 @ 2.50GHz / 16 cores
Memory: 32G
Disk: 200GB (3400 IOPS)

I have 170G of raw logs ingested into the Splunk indexer every day across 5 indexes, one of which is 1.3TB in size. Its name is tomcat, and it stores the logs of the backend application; the index is now full. When I search for events in this index, the search speed is very slow. My search is:

index=tomcat uri="/xxx/xxx/xxx/xxx/xxx" "xxxx"

I'm very sorry that I use xxx to represent certain words, because the API paths involve privacy issues. When searching for events from 7 days ago, no results were returned for a long time. I even tried searching the logs for a specific day, but the search speed was still not ideal; after waiting about 5 minutes, I gradually saw some events appear on the page.

I checked the job inspector and found that the execution costs of command.search.index, dispatch.finalizeRemoteTimeline, and dispatch.fetch.rcp.phase_0 are high, but that doesn't help me much. I tried leaving the search head and performing the search on the indexer's web UI, but it was still slow. Does this mean there is no bottleneck on the search head?

During the search, I observed the host monitoring metrics (screenshot attached). It seems that the indexer server resources are not completely exhausted. So I tried restarting the indexer's splunkd service, and unexpectedly the search speed seemed to improve: with the same search query and time range, events were gradually returned, although not particularly fast. Just as I was celebrating that I had solved the problem, my colleague told me the next day that the search speed was unsatisfactory again, although results were still gradually returned during the search. So this is not a real solution; it only helps temporarily.

How do you think I should solve the problem of slow search speed? Should I scale out horizontally and create an indexer cluster?
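One diagnostic sketch that may help separate index-lookup cost from raw-data cost: tstats reads only the tsidx files, so if this returns quickly while the raw search crawls, the time is going into fetching and decompressing raw events from disk rather than into the index lookup:

| tstats count where index=tomcat by _time span=1h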
I have two searches: one search produces Icinga problem alerts and the other produces Icinga recovery alerts. I want to compare the host and State fields; if the Icinga alert has been recovered within a 15-minute window, no action should be taken, otherwise a script should be executed.

First search, below is the snippet.

Second search, below is the snippet.
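Without the snippets I can only sketch the shape of it; assuming both alert types live in one index with host and State fields (all field values here are assumptions), a single search can pair them up and gate the action:

index=icinga (State="PROBLEM" OR State="RECOVERY")
| stats min(eval(if(State="PROBLEM", _time, null()))) as problem_time, max(eval(if(State="RECOVERY", _time, null()))) as recovery_time by host
| eval recovered_in_15m=if(isnotnull(recovery_time) AND recovery_time - problem_time <= 900, "yes", "no")
| where recovered_in_15m="no"

Alerting on the remaining rows is then what triggers the script.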
Hello team,

I am working with Dovecot logs (mail logs). I managed to integrate them with Splunk through syslog, and it gives me the logs in the format shown in the attached screenshot.

Now I want to create a new field to hold the value of the to/receiver. In the screenshot, the value of the to/receiver is inside lda(value).

NOTE: in the screenshot below I don't have to/receiver values extracted; I just have from/sender and subject.

Help me please!
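A sketch of the extraction, assuming the receiver really does sit inside the parentheses of lda(...) as in typical Dovecot delivery lines (receiver is my field name):

| rex "lda\((?<receiver>[^)]+)\)"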
  Hello Splunkers!! Please help me to fix this time zone issue. Thanks in advance!!
index=testindex source=application.logs
| rex "ErrorCode\:(?<Error_Code>\d+)"
| search Error_Code IN(200, 500, 400, 505, 500)
| stats count by Error_Code
| where count > 5

Output:

Error_Code   count
200          20
500          100
400          40
505          45
500          32

Instead of the error codes we want to display custom text as shown below. How can we do this?

Expected output:

Error_Code                                 count
Application received with errorcode 200   20
Application received with errorcode 500   100
Application received with errorcode 400   40
Application received with errorcode 505   45
Application received with errorcode 500   32
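A minimal sketch: one eval after the stats rewrites the label (the wording is taken from the expected output above):

index=testindex source=application.logs
| rex "ErrorCode\:(?<Error_Code>\d+)"
| search Error_Code IN(200, 500, 400, 505, 500)
| stats count by Error_Code
| where count > 5
| eval Error_Code="Application received with errorcode ".Error_Code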
I've been debugging my inner join query for hours, and that's why I'm here with my first question for this community.

We have a CSV lookup table with fields "Host_Name", "IP", and others, based on our known hosts that should be reporting. Note: in our Splunk logs, for some hosts the Splunk "host" field matches the lookup table "Host_Name" field, and some hosts match the "IP" field. For this reason, when we add a new host, we add 2 rows to the lookup, and place the host name and the IP in both fields of the lookup. (Long story.)

Our lookup ("System_Hosts.csv") looks like this:

Host_Name    IP
Foo          Bar
ServerA      123.45.6.7
xyz          abc
123.45.6.7   ServerA
def          ghi
ServerB      ...and so on

Queries that don't work (this is a very oversimplified stub of the query, but I'm debugging and brought it down to the smallest code that doesn't function):

index=myindex | join type=inner host [| inputlookup System_Hosts.csv | fields Host_Name, IP] | table host

(Removing one of the fields from the lookup, just in case I don't understand inner join and the Splunk host has to match both "Host_Name" and "IP" lookup fields to return results):

index=myindex | join type=inner host [| inputlookup System_Hosts.csv | fields Host_Name]

(Removing the "type=inner" optional parameter also doesn't work as expected; inner is the default type.)

Queries that DO work:

(To verify logs and hosts exist, and visually match the hosts to the lookup table:)

index=myindex | table host

(To verify the lookup is accessible, and the fields and syntax are accurate:)

index=myindex | inputlookup System_Hosts.csv | fields Host_Name, IP | table Host_Name, IP

(To make me crazy? Outer join works. But this just returns all hosts from every log.)

index=myindex | join type=outer host [| inputlookup System_Hosts.csv | fields Host_Name, IP] | table host

So these have been verified:
- spelling of the lookup
- spelling of the lookup fields
- permission to access the lookup
- syntax of the entire query without the "type=inner" optional argument

From my understanding, when this works, the query will return a table with hosts that match entries in the "Host_Name" OR "IP" fields from the lookup. If I don't understand inner join please tell me, but this is secondary to making inner join work at all, because as you can see above, I tried to match only the "Host_Name" field with no success.

I'm pulling my hair out! Please help!
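My best guess at the cause, from the queries above rather than the actual data: join matches on a field both result sets share, and the subsearch rows only carry Host_Name and IP, never a field literally named host, so an inner join on host has nothing to match, while an outer join simply keeps every event, which is exactly the behavior described. A sketch of the rename that should make the inner join bite, plus a join-free lookup alternative (name_match and ip_match are my field names):

index=myindex
| join type=inner host
    [| inputlookup System_Hosts.csv
     | eval host=Host_Name
     | fields host]
| table host

index=myindex
| lookup System_Hosts.csv Host_Name as host OUTPUT Host_Name as name_match
| lookup System_Hosts.csv IP as host OUTPUT IP as ip_match
| where isnotnull(name_match) OR isnotnull(ip_match)
| table host

Since each host appears in the Host_Name column in both orders (per the two-rows-per-host convention), the renamed join alone should cover both the name and IP cases.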