All Posts

It's working! Thank you for your quick response.
Hi @Roberto.Barnes, If the reply from Manish helped, please click the "Accept as Solution" button to confirm your question has been answered. If you still need help, please reply to keep the conversation going! 
I have been trying to achieve "grouped email recipients" and while it is possible, it just won't behave the way I want with generative commands. For raw events it works great to have a macro with an eval setting "recipients" to a list of email addresses and then using $result.recipients$ in "action.email.to =". However, for things like stats and table this does not work, as the actual values of recipients are not part of the results. So for "table" it works if I include "recipients" in the table, but that looks horrible.

This can be sort of demonstrated like so. This works:

index="_internal" | `recipients` | dedup log_level | table log_level | fields recipients

And this does not, as recipients is empty:

index="_internal" | eval recipients = "email1@email.com, email2@email.com" | dedup log_level | table log_level | fields recipients

So, someone suggested that one could use a savedsearches.conf.spec file to define a token like:

[savedsearches]
recipients = <string>

and then use "recipients" in the savedsearches.conf file as $recipients$. This does not seem to be the case though; I cannot find it documented anywhere, and the spec file seems to be more "instructive" than anything.

Another suggestion was to define a global token directly in the savedsearches file like:

[tokens]
recipients = Comma-separated list of email addresses

and then use $recipients$ for all "action.email.to = $recipients$" in that file. Though I cannot find this token definition solution documented anywhere either.

Are any of these suggestions at all valid? Is there any way, somewhere in the app where the alerts live, to define a "token" like "recipients" which can be referenced in all "action.email.to" instances in that file, so that I only have to update one list in one place? Or is this a "suggested improvement" I need to submit somewhere?

All the best
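For reference, a minimal sketch of the macro-based approach described above (the stanza names, addresses, and search are illustrative assumptions, not from any official docs). The key point is that $result.recipients$ only resolves if the recipients field survives into the final results, which is why it is carried through the stats by clause here:

macros.conf:

[recipients]
definition = eval recipients = "email1@email.com, email2@email.com"

savedsearches.conf:

[example_alert]
search = index="_internal" log_level=ERROR | `recipients` | stats count by log_level, recipients
action.email = 1
action.email.to = $result.recipients$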
Hello,

I have a problem with Linux UFs. It seems they are sending data in batches. The period between batches is about 9 minutes, which means that the oldest messages in a batch arrive at the indexer with a 9-minute delay. It starts approximately 21 minutes after a UF restart; during those 21 minutes the delay is constant and low.

All Linux UFs behave in a similar way: it starts 21 minutes after the UF restart, but the period differs. The UF versions are 9.2.0.1 and 9.2.1.

I have checked:
- queue state in the internal logs, it looks OK
- UF thruput is set to 10240

I have independently tested that after restarting the UF the data comes in with a low and constant delay. After about 21 minutes it stops for about 9 minutes. After 9 minutes, a batch of messages arrives and is indexed, creating a sawtooth progression in the graph. It doesn't depend on the type of data; it behaves the same for internal UF logs and other logs.

I currently collect data using the file monitor input and the journald input.

I can't figure out what the problem is.

Thanks in advance for the help,

Michal
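Two sketches that may help narrow this down (the host name and span are placeholders). The thruput ceiling mentioned above lives in limits.conf on the UF:

[thruput]
maxKBps = 10240

and the indexing lag can be graphed from _indextime to see the sawtooth directly:

index=_internal host=<uf_host> | eval lag=_indextime - _time | timechart span=1m avg(lag) max(lag)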
Hello Gustavo,

Yes, by default the SaaS controller is SSL-enabled, so we need to provide a secure connection; otherwise the Cluster Agent will fail to connect to the controller. Glad that helped.

Best Regards,
Rajesh Ganapavarapu
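For anyone landing here later, a hedged sketch of what this looks like in the Cluster Agent configuration, assuming the standard controllerUrl key and a placeholder account name (check your own cluster-agent.yaml):

controllerUrl: "https://<account>.saas.appdynamics.com:443"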
Hi All,

Please help me solve the below queries for a Splunk Classic dashboard.

Query 1: For example, we have created a table for each alert in Splunk, with all the alert details as individual columns (alertid, alertname, alerttime, alertsummary, alertdescription, etc.) in a Splunk Classic dashboard. How do I add an extra "comment" column to that table, manually enter a value in the column for each row, and save it to a lookup file?

Query 2: Is it possible to add an editable column to a Splunk table and save the response to a lookup table? If yes, please help me implement the same in the dashboard.
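One commonly used pattern for query 2 (a sketch under assumptions, not a turnkey solution: the lookup alert_comments.csv already exists, and $alertid$ and $comment$ are set by dashboard inputs). The table panel joins the alerts with the comments lookup:

index=my_alerts | table alertid alertname alerttime alertsummary alertdescription | lookup alert_comments.csv alertid OUTPUT comment

and a second search, fired from a text input, upserts the comment:

| makeresults | eval alertid="$alertid$", comment="$comment$" | fields alertid comment | inputlookup append=true alert_comments.csv | dedup alertid | outputlookup alert_comments.csv

The dedup keeps the first row per alertid, which is the newly entered comment, so existing comments for other alerts are preserved.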
This is confusing.  Could you explain "convert them?" Do you mean the raw events are not in XML?  In that case, could you share raw events?  Also, French should not stop Splunk as long as it is encoded in UTF-8 or another compatible scheme.
Sorry if it's not clear. For example: hostnames A, B, C belong to owner X, and hostnames D, E, F belong to owner Y. I want each filter to be bound to the tokens of the other filters. So, for example, if I set the owner filter to value X, the Hostname dropdown only displays A, B, C. Or if I choose hostname A, the owner filter only shows value X. Is this possible?
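One way this can work in Simple XML (a sketch, assuming a host_owners.csv lookup with hostname and owner columns; the "All" defaults avoid the chicken-and-egg problem of two mutually dependent inputs):

<input type="dropdown" token="owner">
  <label>Owner</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>owner</fieldForLabel>
  <fieldForValue>owner</fieldForValue>
  <search>
    <query>| inputlookup host_owners.csv | search hostname="$hostname$" | stats count by owner</query>
  </search>
</input>
<input type="dropdown" token="hostname">
  <label>Hostname</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>hostname</fieldForLabel>
  <fieldForValue>hostname</fieldForValue>
  <search>
    <query>| inputlookup host_owners.csv | search owner="$owner$" | stats count by hostname</query>
  </search>
</input>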
Replace stats in the query with timechart and it should work.

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | timechart span=1d sum(b) as usage | eval usage=round(usage/1024/1024/1024) | eval usage = tostring(usage, "commas")
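One small follow-up, since the original question asks for TB: dividing bytes by 1024 three times yields GB, so a fourth division (written here with pow just for illustration) converts to TB:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | timechart span=1d sum(b) as bytes | eval usage_tb=round(bytes/pow(1024,4), 2)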
As an alternative you can use other functions:

| eval trimmed_email=trim(Employee_Email,"\"[]")

or

| eval substr_email=substr(Employee_Email,3,len(Employee_Email)-4)
You're doing stats aggregation to a single value. Your stats sum(b) will produce just one overall number.
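Put differently, to get one row per day, _time has to be part of the grouping; timechart does that for you, and the equivalent explicit form (shown for illustration) is:

... | bin _time span=1d | stats sum(b) as usage by _time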
Hi @scout29, see the Monitoring Console app, or go to [Settings > License > License Consumption Report > previous 30 days] and you'll have your search. Ciao. Giuseppe
I am trying to create a bar chart that shows the total daily Splunk ingestion (in TB) by day for the past month. I am using the below search, but I am not able to get |timechart to work to display the total ingestion by day. What am I missing?

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* | stats sum(b) as usage | eval usage=round(usage/1024/1024/1024) | eval usage = tostring(Used, "commas")
thanks for clarifying
You need to escape the square brackets and double quotes | eval test1=replace(replace(Employee_Email,"\[\"",""),"\"\]","")
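A quick self-contained check of that pattern against the sample value from the question (makeresults is only there to fabricate a test event):

| makeresults | eval Employee_Email="[\"firstname.lastname@gmail.com\"]" | eval test1=replace(replace(Employee_Email,"\[\"",""),"\"\]","")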
Hi, we moved a customer from virtualized Splunk indexers to physical machines with NVMe storage. Since we performed this migration, the customer has experienced slower results when running dense searches. So I checked the job inspector and it seems that there is an issue. As far as I understood, the value "dispatch.fetch" is the time the SH waits for the indexers to return results. Is this value based on network or storage conditions? Attached is the slightly blurred job inspector.
Hi,

Thanks for your reply. I just had a look in the transforms.conf file and saw stanzas such as:

[system_props_xml_attributes]
# Extracts values from following fields:
# Provider: Name, Guid
# TimeCreated: SystemTime, RawTime
# Correlation: ActivityID, RelativeActivityID
# Execution: ProcessID, ThreadID, ProcessorID, SessionID, KernelTime, UserTime, ProcessorTime
# Security: UserID

So, for the element "Provider", Name and Guid are attributes; similarly, for the element "TimeCreated", SystemTime and RawTime are attributes. So the fields are parsing correctly, right?
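To make the element/attribute distinction concrete, a representative fragment of the kind of event XML those comments describe (the values here are placeholders): the transform pulls the attributes of each element into fields, e.g. Name and Guid from Provider.

<Provider Name='Microsoft-Windows-Security-Auditing' Guid='{...}'/>
<TimeCreated SystemTime='2024-05-01T12:00:00Z'/>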
Edit: I tried:  | eval test1 = replace (Employee_Email, "[" , "")
Hi,

I have a field called "Employee_Email". This field contains the value: ["firstname.lastname@gmail.com"]

How do I remove the special characters [" and "]?

I tried:

| eval test1 = replace (Employee_Email "[" , "")

But when I tried to remove either [ or " it gives me the following errors:

Error in 'EvalCommand': Regex: missing terminating ] for character class

Or: Unbalanced quotes.

Is there a way to ignore the normal effect of [ and "?