All Posts

I suspect the issue lies with this line:

| lookup EventCodes EventCode,LogName OUTPUTNEW desc

I assume this is intended to use a lookup definition called EventCodes. Could you try running inputlookup on EventCodes in a separate search and see which columns, if any, appear?

| inputlookup EventCodes

If there are no results, then either EventCodes does not exist as a lookup definition or you don't have permission to view it. If there are columns, but none called "EventCode", "LogName" or "dest", then you'll need to adjust those column names in the lookup command.
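For example, if the lookup's columns turn out to have different names, the lookup command can rename them with AS. A sketch, where event_id, log_name and description are hypothetical column names:

| lookup EventCodes event_id AS EventCode, log_name AS LogName OUTPUTNEW description AS desc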
You can set recipients as a hidden field by prepending '_' to the field name. This will prevent the recipients column from appearing in the table, but the token will still work.

| eval _recipients = "email1@email.com, email2@email.com"

Then use $result._recipients$ in the "action.email.to =". I would also suggest putting this _recipients eval at the end of your search so it does not accidentally get removed by things like "table". It should also work if you put the eval statement into a macro.
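Put together in savedsearches.conf, that would look roughly like this (a sketch; the stanza name and the search itself are placeholders):

[My Alert]
search = index=main error | stats count by host | eval _recipients = "email1@email.com, email2@email.com"
action.email = 1
action.email.to = $result._recipients$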
In the CSV file I have id, system, time_range, count_err. I received a ready-made dashboard that monitors the DAGs from Airflow. I am interested in creating a separate alert for each DAG with the same logic as the dashboard, with one small change: in the dashboard I mark SUCCESS if the Airflow logs returned success within the time frame given in the corresponding field in the CSV file, and ERROR if they did not return success or returned FAILED. In the alert, I want it to be ERROR if I receive FAILED as many times as listed in the CSV file, or if success is not returned within the time_range I specified in the CSV file.

The dashboard reads from the file with this syntax:

[| inputlookup xxx.csv .....]
| lookup xxx.csv dag_id OUTPUTNEW system time_range

And I want to add a field:

| lookup xxx.csv dag_id OUTPUTNEW system time_range count_err

And I don't know why the extra field is not displayed.
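To check that the column is really there under that name, I can run the lookup file directly (a quick sketch, not a fix):

| inputlookup xxx.csv | table dag_id system time_range count_err

If count_err comes back empty here, the column name in the CSV differs from what the lookup command expects.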
It's working! Thank you for your quick response.
Hi @Roberto.Barnes, if the reply from Manish helped, please click the "Accept as Solution" button to confirm your question has been answered. If you still need help, please reply to keep the conversation going!
I have been trying to achieve "grouped email recipients" and while it is possible, it just won't behave the way I want with transforming commands. For "raw events" it works great to have a macro with an eval setting "recipients" to a list of email addresses and then using $result.recipients$ in the "action.email.to =". However, for things like stats and table, this does not work as the actual values of recipients are not part of the results. So for "table" it works if I include "recipients" in the table, but that looks horrible.

This can be sort of demonstrated like so, where this works:

index="_internal" | `recipients` | dedup log_level | table log_level | fields recipients

And this does not:

index="_internal" | eval recipients = "email1@email.com, email2@email.com" | dedup log_level | table log_level | fields recipients

as recipients is empty.

So, someone suggested that one could use a savedsearches.conf.spec file to define a token like:

[savedsearches]
recipients = <string>

and then use "recipients" in the savedsearches.conf file as $recipients$. This does not seem to be the case though; I cannot find this documented anywhere and the spec file seems to be more "instructive" than anything.

Another suggestion was to define a global token directly in the savedsearches file like:

[tokens]
recipients = Comma-separated list of email addresses

and then use $recipients$ for all "action.email.to = $recipients$" in that file. Though I cannot find this token definition solution documented anywhere either.

Are any of these suggestions at all valid? Is there any way, somewhere in the app where the alerts live, to define a "token" like "recipients" which can be referenced in all "action.email.to" instances in that file, so that I only have to update one list in one place? Or is this a "suggested improvement" I need to submit somewhere?

All the best
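For context, the macro mentioned above is nothing more than this in macros.conf (a sketch; the list of addresses is a placeholder):

[recipients]
definition = eval recipients = "email1@email.com, email2@email.com"

which is then invoked in a search as `recipients`.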
Hello,

I have a problem with Linux UFs. It seems they are sending data in batches. The period between batches is about 9 minutes. It means that for the oldest messages in a batch it creates a 9 minute delay on the indexer. It starts approximately 21 minutes after a restart. During these 21 minutes the delay is constant and low.

All Linux UFs behave in a similar way. It starts 21 minutes after a UF restart, but the period is different.

The UF versions are 9.2.0.1 and 9.2.1.

I have checked:
- queues state in internal logs, it looks ok
- UF thruput is set to 10240

I have independently tested that after restarting the UF the data comes in with a low and constant delay. After about 21 minutes it stops for about 9 minutes. After 9 minutes, a batch of messages arrives and is indexed, creating a sawtooth progression in the graph. It doesn't depend on the type of data. It behaves the same for internal UF logs and other logs.

I currently collect data using file monitor input and journald input.

I can't figure out what the problem is.

Thanks in advance for help

Michal
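For reference, the thruput setting mentioned above is from limits.conf on the UF (the standard stanza, shown here as a sketch):

[thruput]
maxKBps = 10240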
Hello Gustavo,

Yes, by default the SaaS controller is SSL-enabled, so we need to provide a secure connection, otherwise the Cluster Agent will fail to connect to the controller. Glad that helped.

Best Regards,
Rajesh Ganapavarapu
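For anyone landing here later, that means the controller URL in the Cluster Agent configuration has to use https, along these lines (a sketch; host and port are placeholders):

controllerUrl: "https://mycompany.saas.appdynamics.com:443"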
Hi All,

Please help me to solve the below queries in a Splunk classic dashboard.

Query 1: For example, we have created a table for each alert in Splunk with all the alert details as individual columns like alertid, alertname, alerttime, alertsummary, alertdescription etc. in a Splunk classic dashboard. Now, how do we add an extra column as a comment to the above Splunk table, manually enter the values in the column in each row, and save it in a lookup file?

Query 2: Is it possible to add an editable column in a Splunk table and save the response in a lookup table? If yes, please help me to implement the same in a dashboard.
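For reference, this is the lookup side of what I have in mind, assuming a hypothetical lookup file alert_comments.csv keyed on alertid. Enriching the dashboard table with saved comments:

| lookup alert_comments.csv alertid OUTPUTNEW comment

and writing an entered comment back:

| table alertid comment | outputlookup append=true alert_comments.csv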
This is confusing. Could you explain "convert them"? Do you mean the raw events are not in XML? In that case, could you share the raw events? Also, French should not stop Splunk as long as it is encoded in UTF-8 or another compatible scheme.
Sorry if it's not clear. For example: Hostnames A, B, C belong to owner X. Hostnames D, E, F belong to owner Y. I want each filter to be bound to tokens on the other filters. So, for example, if I set the owner filter to value X, the dropdown on the Hostname filter only displays A, B, C. Or if I choose hostname A, the owner filter only shows the value X. Is it possible?
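Something like this is what I have in mind in Simple XML, with each dropdown's populating search filtered by the other input's token (a sketch, assuming a hypothetical lookup hosts.csv with owner and hostname columns):

<input type="dropdown" token="owner">
  <label>Owner</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>owner</fieldForLabel>
  <fieldForValue>owner</fieldForValue>
  <search>
    <query>| inputlookup hosts.csv | search hostname="$hostname$" | stats count by owner</query>
  </search>
</input>
<input type="dropdown" token="hostname">
  <label>Hostname</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>hostname</fieldForLabel>
  <fieldForValue>hostname</fieldForValue>
  <search>
    <query>| inputlookup hosts.csv | search owner="$owner$" | stats count by hostname</query>
  </search>
</input>

Since the two inputs reference each other's tokens, both need a default (here *) so neither is stuck waiting for the other on first load.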
Replace stats in the query with timechart and it should work.

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | timechart span=1d sum(b) as usage | eval usage=round(usage/1024/1024/1024) | eval usage = tostring(usage, "commas")
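Since the original goal was TB rather than GB, one more division by 1024 does it. The same search with a different divisor (two decimal places is just a choice):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | timechart span=1d sum(b) as usage | eval usage=round(usage/1024/1024/1024/1024, 2)

Note that tostring(usage, "commas") is left off here: it turns the field into a string, which a bar chart won't plot.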
As an alternative you can use other functions:

| eval trimmed_email=trim(Employee_Email,"\"[]")

or

| eval substr_email=substr(Employee_Email,3,len(Employee_Email)-4)
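A quick way to see both in action (the sample address is made up):

| makeresults | eval Employee_Email="[\"jane.doe@example.com\"]" | eval trimmed_email=trim(Employee_Email,"\"[]") | eval substr_email=substr(Employee_Email,3,len(Employee_Email)-4)

Both should return jane.doe@example.com.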
You're doing a stats aggregation down to a single value. Your stats sum(b) will produce just one overall number, so there is nothing left to chart by day.
Hi @scout29,

see the Monitoring Console app, or [Settings > License > License Consumption Report > previous 30 days], and you'll have your search.

Ciao.

Giuseppe
I am trying to create a bar chart that shows the total daily Splunk ingestion (in TB) by day for the past month. I am using the below search, but I am not able to get the | timechart to work to display the total ingestion by day. What am I missing?

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | stats sum(b) as usage | eval usage=round(usage/1024/1024/1024) | eval usage = tostring(Used, "commas")
Thanks for clarifying.
You need to escape the square brackets and double quotes:

| eval test1=replace(replace(Employee_Email,"\[\"",""),"\"\]","")
Hi, we moved a customer from virtualized Splunk indexers to physical machines with NVMe storage. Since we performed this migration, the customer experiences slower results when running dense searches. So I checked the job inspector and it seems that there is an issue. As far as I understood, the value "dispatch.fetch" is the time the SH waits for the idx to return the results. Is this value based on network or storage conditions? Attached the slightly blurred job inspector.