All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

You have a typo. It should be: | where isnotnull(vuln) AND isnotnull(score) AND len(company) > 0
Hello all,  I have a lookup with a single column that lists source file names and paths.  I want to search an index, look up the sources, and then show the latest time of those sources.  I also want to show if a file hasn't logged at all in a given timeframe. I set the lookup to use WILDCARD() in the lookup definition, but I am now struggling with the search. I basically want the search to look up each source file, then search the index and tell me the latest time of the log, as well as show "No Logs Found" if the source doesn't exist. I was toying with this, but the wildcards aren't working, and I think it is because I am not using the definition.  But even so, I can't wrap my head around the search.     | inputlookup pvs_source_list | join type=left source [| search index=pvs | stats latest(_time) as TimeAx by source]     Thank you!
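A sketch of one way to structure this without join, assuming the lookup is named pvs_source_list with a field called source (as in the post) and that exact-path matching is acceptable; WILDCARD() matching would additionally require applying the lookup to events via the lookup command rather than inputlookup:

```spl
index=pvs
| stats latest(_time) as TimeAx by source
| inputlookup append=true pvs_source_list
| stats max(TimeAx) as TimeAx by source
| eval LastSeen=if(isnull(TimeAx), "No Logs Found", strftime(TimeAx, "%Y-%m-%d %H:%M:%S"))
```

The idea is that appending the lookup rows guarantees every expected source gets a row; sources with no events in the timeframe keep a null TimeAx, which becomes "No Logs Found".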
Hello, Thank you for your help. When I use one condition, it worked:   | where len(company)>0   1) But when I combined "len" with the other conditions, it didn't work - "The search job has failed due to an error."   | where isnotnull(vuln) AND isnotnull(score) AND len(company>0)   2) Why can't I use the len function without "where"?      3) Can I use company=* to include "exists/non-empty"?          It looks like * also didn't work. Please suggest. Thanks
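For what it's worth, the failing query above has a misplaced parenthesis: len(company>0) passes the boolean expression company>0 into len(), which fails because len() expects a string. A sketch of the corrected where clause:

```spl
| where isnotnull(vuln) AND isnotnull(score) AND len(company)>0
```

On question 3: company=* does act as a "field exists and is non-empty" filter when used in the base search (e.g. index=my_index company=*), but wildcards are not evaluated inside where; there the equivalent tests are isnotnull(company) or company!="".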
Hi @scout29 , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi, There isn't currently a way to do this in the UI. You could accomplish it by using the API to toggle the test's "active" field to true or false based on your schedule, and use something like cron to manage the schedule. Here's an example curl command to the API to pause a test: curl -X PUT "https://api.<YOUR REALM>.signalfx.com/v2/synthetics/tests/browser/<YOUR TEST ID>" -H "Content-Type: application/json" -H "X-SF-TOKEN: <YOUR API TOKEN>" -d '{"test": {"active": false}}'
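To round this out, a hypothetical crontab fragment that pauses the test overnight and resumes it in the morning (the 22:00/06:00 window is just an example, and realm, test ID, and token remain placeholders):

```cron
# pause the synthetic test at 22:00
0 22 * * * curl -X PUT "https://api.<YOUR REALM>.signalfx.com/v2/synthetics/tests/browser/<YOUR TEST ID>" -H "Content-Type: application/json" -H "X-SF-TOKEN: <YOUR API TOKEN>" -d '{"test": {"active": false}}'
# resume it at 06:00
0 6 * * * curl -X PUT "https://api.<YOUR REALM>.signalfx.com/v2/synthetics/tests/browser/<YOUR TEST ID>" -H "Content-Type: application/json" -H "X-SF-TOKEN: <YOUR API TOKEN>" -d '{"test": {"active": true}}'
```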
Any update @Cansel.OZCAN?
Could you please replicate the issue and share the search query in Splunk?
Hi, We are testing Splunk Add-on for Sysmon for Linux to ingest Sysmon data from Linux systems. Data ingestion and the majority of the extractions are working fine, except the Data part.   <Data Name="FieldName">    It appears that Splunk completely skips over this. We have Sysmon for Windows working as well, and the same attribute gets extracted just fine. The data format between Sysmon for Linux vs Windows is identical, as are the transform stanzas in the TAs. The only difference I could see is that the field names in Windows are enclosed in single quotes, whereas for Linux they are in double quotes. Could this be causing the regex in the TA to not work for Data? Including some examples here.  Sample Data from Linux Sysmon   <Event><System><Provider Name="Linux-Sysmon" Guid="{ff032593-a8d3-4f13-b0d6-01fc615a0f97}"/><EventID>3</EventID><Version>5</Version><Level>4</Level><Task>3</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime="2023-11-13T13:34:45.693615000Z"/><EventRecordID>140108</EventRecordID><Correlation/><Execution ProcessID="24493" ThreadID="24493"/><Channel>Linux-Sysmon/Operational</Channel><Computer>computername</Computer><Security UserId="0"/></System><EventData><Data Name="RuleName">-</Data><Data Name="UtcTime">2023-11-13 13:34:45.697</Data><Data Name="ProcessGuid">{ba131d2e-2a52-6550-285f-207366550000}</Data><Data Name="ProcessId">64284</Data><Data Name="Image">/opt/splunkforwarder/bin/splunkd</Data><Data Name="User">root</Data><Data Name="Protocol">tcp</Data><Data Name="Initiated">true</Data><Data Name="SourceIsIpv6">false</Data><Data Name="SourceIp">x.x.x.x</Data><Data Name="SourceHostname">-</Data><Data Name="SourcePort">60164</Data><Data Name="SourcePortName">-</Data><Data Name="DestinationIsIpv6">false</Data><Data Name="DestinationIp">x.x.x.x</Data><Data Name="DestinationHostname">-</Data><Data Name="DestinationPort">8089</Data><Data Name="DestinationPortName">-</Data></EventData></Event>   Sample data from Windows Sysmon   <Event
xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Sysmon' Guid='{5770385f-c22a-43e0-bf4c-06f5698ffbd9}'/><EventID>3</EventID><Version>5</Version><Level>4</Level><Task>3</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2023-11-13T13:26:31.064124600Z'/><EventRecordID>1571173614</EventRecordID><Correlation/><Execution ProcessID='2988' ThreadID='5720'/><Channel>Microsoft-Windows-Sysmon/Operational</Channel><Computer>computername</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='RuleName'>-</Data><Data Name='UtcTime'>2023-11-13 13:26:13.591</Data><Data Name='ProcessGuid'>{f4558f15-1db6-654f-8400-000000007a00}</Data><Data Name='ProcessId'>4320</Data><Data Name='Image'>C:\..\..\image.exe</Data><Data Name='User'>NT AUTHORITY\SYSTEM</Data><Data Name='Protocol'>tcp</Data><Data Name='Initiated'>true</Data><Data Name='SourceIsIpv6'>false</Data><Data Name='SourceIp'>127.0.0.1</Data><Data Name='SourceHostname'>computername</Data><Data Name='SourcePort'>64049</Data><Data Name='SourcePortName'>-</Data><Data Name='DestinationIsIpv6'>false</Data><Data Name='DestinationIp'>127.0.0.1</Data><Data Name='DestinationHostname'>computername</Data><Data Name='DestinationPort'>4932</Data><Data Name='DestinationPortName'>-</Data></EventData></Event>   Transforms on both sides are also identical except the difference for single Vs double quotes.   Linux [sysmon-data] REGEX = <Data Name="(.*?)">(.*?)</Data> FORMAT = $1::$2 Windows [sysmon-data] REGEX = <Data Name='(.*?)'>(.*?)</Data> FORMAT = $1::$2    Any clues on what could be causing Splunk to not extract Data attribute for Linux? Transforms for other elements such as Computer, Keywords are working fine, it just skips this Data part completely. Thanks,
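At the pure-regex level, the double-quote pattern does match the Linux sample above, which suggests the problem may lie elsewhere (for example, the wrong transform being applied to the Linux sourcetype). A quick way to sanity-check the pattern outside Splunk, using Python's re module (samples are trimmed from the event XML in the post):

```python
import re

linux_sample = (
    '<Data Name="SourceIp">x.x.x.x</Data>'
    '<Data Name="DestinationPort">8089</Data>'
)
windows_sample = (
    "<Data Name='SourceIp'>127.0.0.1</Data>"
    "<Data Name='DestinationPort'>4932</Data>"
)

# A single pattern that tolerates either quote character, so one
# transform stanza could cover both Linux and Windows Sysmon events.
pattern = re.compile(r"<Data Name=[\"'](.*?)[\"']>(.*?)</Data>")

print(dict(pattern.findall(linux_sample)))
print(dict(pattern.findall(windows_sample)))
```

If both dictionaries come back populated, the regex itself is fine for both quoting styles, and the investigation can focus on which transforms actually fire for the Linux sourcetype.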
Splunk, in particular SPL, works on a pipeline of events. Each event in the pipeline is processed. If you have a number of events over a number of days, how do you distinguish them from each other as your event example doesn't appear to have a timestamp? That being said, if you do have a way to identify the original events, before the mvexpand, you can use stats by to gather the separate parts together again. Perhaps if you provided more representative examples of the events you are dealing with, an explanation of exactly what you are trying to achieve and a representation of your expected / desired output, we might be able to assist you further.
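As a sketch of the "tag before you expand" idea (field names here are hypothetical, since the original events weren't shown):

```spl
| streamstats count as event_id
| mvexpand values
| eval is_error=if(like(values, "%error%"), 1, 0)
| stats values(values) as values max(is_error) as has_error by event_id
```

streamstats assigns each original event a stable id before mvexpand fans it out into one row per value, so stats ... by event_id can reassemble the pieces afterwards.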
Thank you for your reply Cansel, it's already clear to me that with ADQL I can create single values > metrics and, based on those, alerting. I'm now more interested, for example, in understanding whether I can have an alert on a single record; for example, an alert generated based on the fact that a MessageGuid has a "Failed" status. Is it possible, in your experience, to obtain this once I have the data in the analytics engine? Regards
Thank you for your reply. This is useful for isolated JSON files. However, this file is generated every day, and I'd like to display the latest 7 days' numbers in a table by date.
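One possible shape for the 7-day table, assuming the daily files land in an index and the JSON carries a numeric field, here called total (the index, sourcetype, and field names are assumptions):

```spl
index=my_json_index sourcetype=my_json earliest=-7d@d latest=now
| bin _time span=1d
| stats latest(total) as total by _time
| eval date=strftime(_time, "%Y-%m-%d")
| fields date total
```

bin buckets events by day and stats latest() keeps one number per day, so the result is one row per date for the last seven days.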
hi, Could the issue be the absence of Windows Security command-line events: EventCode=4688 ParentProcessName="C:\\Windows\\System32\\cmd.exe"? I have not blacklisted it in any of the apps. Thanks.
Hello @inventsekar, Thank you for your response and for sharing the details. 1. Yes, I used summary indexing to improve the performance of the search query. 2. Yes, the Splunk infrastructure is a clustered environment. Thank you for sharing the documents for improving search performance and for sharing that Splunk can run the search without any negative implications or major issues. Taruchit
Hi @inventsekar , Can we check on the Windows system?
>>> I have a SPL which is scheduled to run each minute for a span of 1 hour. >>> On each execution the search runs for 4 seconds with a size of around 400KB. >>> Thus, how does the scheduler and search head work in such a scenario at the backend? Does the scheduled SPL keep the scheduler and search head busy for the entire 1 hour? Or are they free to run other SPLs during that span? This looks like a normal search. Please update us... is it a clustered environment? If possible, share the Splunk search query with us (please remove hostnames, IP addresses, and any sensitive info from the search query before posting it here), so that we can fine-tune it and you can make sure there will be no negative impacts.  One better idea - did you consider "summary indexing"? https://docs.splunk.com/Documentation/Splunk/9.1.1/Knowledge/Configuresummaryindexes One more doc for your reference: https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-search-best-practices.html?locale=en_us     >>> And can you share any negative implications on Splunk infrastructure due to the above scheduled search? This looks like a doable job and there should be no negative implications; Splunk should be able to handle this scheduled search easily.  The best approach is to have a UAT / Dev / Test Splunk environment (as much as possible, it should replicate the production Splunk environment) and run this search on the UAT/Dev/Test Splunk first, so that you can get assurance and clear confirmation before running anything on the production Splunk.
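A minimal sketch of the summary-indexing idea from the link above (the index names are placeholders): the scheduled search pre-aggregates each minute and writes the small result set into a summary index, and dashboards or reports then read the summary instead of re-scanning raw events:

```spl
index=my_data earliest=-1m@m latest=@m
| stats count as events by host
| collect index=my_summary
```

A report over the last hour then becomes a cheap search against index=my_summary rather than a re-run over the raw data.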
Hi @man03359 .. the metricName can be either CPUPercentage or MemoryPercentage. And then, how do you get the value of either CPUPercentage or MemoryPercentage?   Or.. if you have the values for either CPUPercentage or MemoryPercentage, then you should be able to run: index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*" | chart count by index, metricName | where CPUPercentage > 85 AND MemoryPercentage > 85  When you run this search query, do you get results as you expected? If yes, then you can save it as an alert.  Please let us know if the above search works fine.. if it's not working, please update us on how to get the values of either CPU or memory percentage. Thanks.
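If the numeric reading is carried on each event in a field (called metricValue below; that field name is an assumption, since the raw events aren't shown in the thread), a sketch that pivots the two metrics onto one row so both thresholds can be tested together:

```spl
index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*"
| chart latest(metricValue) over index by metricName
| where CPUPercentage > 85 AND MemoryPercentage > 85
```

chart ... by metricName turns each metricName value into its own column, which is what lets the where clause compare CPUPercentage and MemoryPercentage side by side.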
Hi @AL3Z .. in Linux, using the find and grep commands, you can find all the blacklisted lines recursively.     find . -name '*.conf' -exec grep -i 'blacklist' {} \; -print   grep -Ril "text-to-find-here" / i stands for ignore case (optional in your case). R stands for recursive. l stands for "show the file name, not the result itself". / stands for starting at the root of your machine.
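A runnable demo of the same approach in a scratch directory (the deployment-apps path and stanza contents are made up for the example):

```shell
# create a throwaway tree that mimics a deployment server layout
tmp=$(mktemp -d)
mkdir -p "$tmp/deployment-apps/my_app/local"
printf '[WinEventLog://Security]\nblacklist1 = EventCode="4688"\n' \
  > "$tmp/deployment-apps/my_app/local/inputs.conf"

# find every .conf under the tree; -H prints the file name next to each match
find "$tmp" -name '*.conf' -exec grep -iH 'blacklist' {} \;
```

On a real deployment server, you would run the find from $SPLUNK_HOME/etc/deployment-apps instead of the scratch directory.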
Hello @inventsekar, Thank you for your response and for sharing the details. I searched about the health check and found it gives information related to configuration changes. I checked the Job Details dashboard and fetched details of the average indexer time, which is around 0.9 seconds, and the time taken by the search to run on each indexer, which varied between 0.8 and 1.5 seconds. Thus, can you please share whether it's suitable to run the scheduled search over the Splunk infrastructure, or if it can lead to negative implications and issues? The reason I ask is that it's a scheduled search and we cannot afford search failure or infrastructure breakdown, and thus I want to be sure of the approach. Thank you
Hi,  How can we list out all the apps' inputs.conf blacklisted stanzas on the DS? Because I'm seeing that the command-line events are getting blocked in my environment. Thanks
This doesn't really answer the question. How about this (to try and clarify what your events mean): Is the count always 1? If so, it appears that average, minimum, maximum and total will always be the same number, right? That is, any one of them could be used as the value for the event? If not, which value do you want to use as the value for the event?