All Posts


The only way to retrieve events from Splunk is via a search.
Hi Team, we currently have the Splunk DB app installed on a Heavy Forwarder. How do we connect this app from the Heavy Forwarder to the Splunk Cloud Search Head + Indexer server?
There is no need to rename buckets in this case.
Correct.
What I ended up doing was copying the .spl file here (after creating the Desktop folder): C:\Program Files\SplunkUniversalForwarder\bin\Desktop. Then I copied the applicable Forwarder Management app folders here: C:\Program Files\SplunkUniversalForwarder\etc\apps. The best way I found was to compare the folders on your test machine against a computer that you previously set up "correctly," and then copy over any missing folders. These will generally be the same folders every time. Then I open an administrator command prompt and run these commands:

    cd "C:\Program Files\SplunkUniversalForwarder\bin"
    splunk restart

Once the last command finishes, you should be good to go.

My PDQ deployment looks like this:

Step 1: Install the Universal Forwarder.
Step 2: PowerShell script:

    New-Item -ItemType "directory" -Path "C:\Program Files\SplunkUniversalForwarder\bin\Desktop"

Step 3: File Copy - copy the .spl file into the folder created in Step 2.
Step 4: File Copy - copy any needed app folders into C:\Program Files\SplunkUniversalForwarder\etc\apps (if multiple app folders need to be copied over, each folder is its own step in PDQ).
Step 5: Command Prompt:

    cd "C:\Program Files\SplunkUniversalForwarder\bin"
    splunk restart

Hope this is helpful!
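If it helps to see the whole sequence in one place, here is a hypothetical consolidation of the steps above as a Python dry-run plan builder. All paths and app names are assumptions; it only builds a reviewable list of actions rather than touching the machine.

```python
# Builds the ordered deployment actions as (verb, args) tuples for review;
# nothing here executes against the filesystem or Splunk.
from pathlib import Path

UF_HOME = Path(r"C:\Program Files\SplunkUniversalForwarder")

def deployment_steps(spl_file, app_dirs):
    """Return the ordered actions the deployment performs, as (verb, args) tuples."""
    steps = [
        ("mkdir", UF_HOME / "bin" / "Desktop"),           # Step 2: create Desktop folder
        ("copy", spl_file, UF_HOME / "bin" / "Desktop"),  # Step 3: stage the .spl file
    ]
    for app in app_dirs:                                  # Step 4: one copy per app folder
        steps.append(("copytree", app, UF_HOME / "etc" / "apps" / Path(app).name))
    steps.append(("run", [str(UF_HOME / "bin" / "splunk"), "restart"]))  # Step 5
    return steps

for step in deployment_steps("uf_outputs.spl", ["apps/qualys"]):
    print(step)
```

Swapping the print loop for real shutil/subprocess calls would turn the plan into an actual deployment script.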
Hi Community, just a quick question to see if anyone else has experienced this: do you have problems with the Splunk daemon not communicating properly with the other members of a cluster or deployment because of a NIC teaming issue? Our Splunk (Windows) hosts are servers with dual NICs, and for whatever reason, in an active-active configuration Splunk does not know how to communicate. The underlying host is up and functional, and the local instance of splunkd appears to be running, but communication between nodes is nil until the teaming is moved to active-standby or one of the NICs is disabled.
New Splunk instance throwing an error after deploying apps. Please help.

Root cause:

    Events from tracker.log have not been seen for the last 2190 seconds, which is more than the red threshold

Log (happening multiple times a millisecond):

    -0500 INFO TailingProcessor [MainTailingThread] adding watch on path: /opt/splunk/*
Here are some useful references:

Get data from TCP and UDP ports - Splunk Documentation
Create custom indexes - Splunk Documentation

Note the section in the first link, copied below, where it says that you have to define the inputs stanza attributes so that the ingested data is properly indexed. For the second link, the "Create Event Indexes" section might be orienting for you.

==================
Configure a UDP network input

This type of input stanza is similar to the TCP type, except that it listens on a UDP network port. If you provide <remote server>, the port that you specify only accepts data from that host. If you don't specify anything for <remote server>, the port accepts data that comes from any host.

    [udp://<remote server>:<port>]
    <attribute1> = <val1>
    <attribute2> = <val2>
    ...

The following settings control how the Splunk platform stores the data:

host = <string>
    Sets the host field to a static value for this stanza. Also sets the host key initial value. Splunk Cloud Platform uses this key during parsing and indexing, in particular to set the host field. It also uses the host field at search time. The <string> is prepended with host::.
    Default: the IP address or fully qualified domain name of the host where the data originated.

index = <string>
    Sets the index where Splunk Cloud Platform stores events from this input. The <string> is prepended with index::.
    Default: main, or whatever you set the default index to.

sourcetype = <string>
    Sets the sourcetype field for events from this input. Also declares the source type for this data, as opposed to letting Splunk Cloud Platform determine it. This is important both for searchability and for applying the relevant formatting for this type of data during parsing and indexing. Sets the sourcetype key initial value. Splunk Cloud Platform uses the key during parsing and indexing, in particular to set the source type field during indexing. It also uses the source type field at search time. The <string> is prepended with sourcetype::.
    Default: Splunk Cloud Platform picks a source type based on various aspects of the data. There is no hard-coded default.

source = <string>
    Sets the source field for events from this input. The <string> is prepended with source::. Do not override the source key unless absolutely necessary. The input layer provides a more accurate string to aid in problem analysis and investigation by recording the file from which the data is retrieved. Consider use of source types, tagging, and search wildcards before overriding this value.
    Default: the input file path.

queue = parsingQueue | indexQueue
    Sets where the input processor deposits the events that it reads. Set to parsingQueue to apply the props.conf file and other parsing rules to your data. Set to indexQueue to send your data directly into the index.
    Default: parsingQueue.

_rcvbuf = <integer>
    Sets the receive buffer for the UDP port, in bytes. If the value is 0 or negative, Splunk Cloud Platform ignores the value.
    Default: 1,572,864, unless the value is too large for the OS. In that case, Splunk Cloud Platform halves the value from this default continuously until the buffer size is at an acceptable level.

no_priority_stripping = true | false
    Sets how Splunk Enterprise handles receiving syslog data. If you set this setting to true, Splunk Cloud Platform does not strip the <priority> syslog field from received events. Depending on how you set this setting, Splunk Cloud Platform also sets event timestamps differently. When set to true, Splunk Cloud Platform honors the timestamp as it comes from the source. When set to false, Splunk Enterprise assigns events the local time.
    Default: false (Splunk Cloud Platform strips <priority>).

no_appending_timestamp = true | false
    Sets how Splunk Cloud Platform applies timestamps and hosts to events. If you set this setting to true, Splunk Cloud Platform does not append a timestamp and host to received events. Do not configure this setting if you want to append the timestamp and host to received events.
    Default: false (Splunk Cloud Platform appends timestamps and hosts to events).

@jmrubio
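Pulling those settings together, a minimal inputs.conf sketch might look like the following. The port, index name, and sourcetype here are placeholders, not recommendations; use values that fit your environment.

```ini
# Hypothetical UDP input stanza (placeholder port/index/sourcetype)
[udp://514]
index = network_syslog
sourcetype = syslog
queue = parsingQueue
no_priority_stripping = false
no_appending_timestamp = false
```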
Also, make sure that any tags, eventtypes, and macros are properly parallel. Data models can be tricky; there are a lot of sometimes-subtle things that need to be in place and configured correctly.

About tags and aliases - Splunk Documentation
Tag event types - Splunk Documentation
About event types - Splunk Documentation
Use search macros in searches - Splunk Documentation
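As a rough illustration of what "parallel" means here (all names below are made up): the eventtype name must match exactly between eventtypes.conf and the tags.conf stanza, or the tag silently fails to apply and the data model misses the events.

```ini
# eventtypes.conf (hypothetical eventtype and search)
[my_authentication_events]
search = index=security sourcetype=auth_logs action=*

# tags.conf -- stanza name must reference the eventtype name exactly
[eventtype=my_authentication_events]
authentication = enabled
```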
Please give us a mock-up of what your desired output would look like
I'm looking for support on my $xmlregex blacklist. I have checked as many previous tickets as I can and I'm still stuck. It works when I put the events into regex101, which is why I'm so confused. This is what I have ended up with:

    [WinEventLog://Microsoft-Windows-PowerShell/Operational]
    disabled = 0
    start_from = oldest
    renderXml = 1
    # 4100 Error Log | 4104 Script Block
    whitelist = 4104,4100
    blacklist = $xmlRegex= $\<EventID\>(?:4104|4100)\<\/EventID\>.*\<Data\sName='ScriptBlockText'\>[\S\s]*[C-Z]:\\Program(?:\sFiles|Data)(\s\(x86\))?\\(?:qualys|Nexthink|uniFLOW\sSmartClient)\\$
    blacklist1 = $xmlRegex= $\<EventID\>(?:4104|4100)\<\/EventID\>.*\<Data\sName='ScriptBlockText'\>[\S\s]*[C-Z]:\\Windows\\ccm\\$

I've had to use [\S\s]* because it's a PowerShell script which has carriage returns in it. Any help would be massively appreciated. Thanks!
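As a sanity check outside Splunk, a simplified version of the pattern (escaping reduced, path list shortened) can be run in Python against a made-up event. This only exercises the regex itself, not Splunk's blacklist parsing, which is where the problem may actually live.

```python
import re

# Simplified form of the blacklist pattern: EventID 4104/4100 where the
# ScriptBlockText data references one of the excluded install paths.
pattern = re.compile(
    r"<EventID>(?:4104|4100)</EventID>.*"
    r"<Data Name='ScriptBlockText'>[\S\s]*"
    r"[C-Z]:\\Program(?:\sFiles|Data)(?:\s\(x86\))?\\(?:qualys|Nexthink)\\"
)

# Made-up XML event with a carriage return inside the script block,
# which is why [\S\s]* is used instead of .* in that position.
event = (
    "<EventID>4104</EventID><Channel>x</Channel>"
    "<Data Name='ScriptBlockText'>line one\r\n"
    r"C:\Program Files\qualys\agent.ps1</Data>"
)

print(bool(pattern.search(event)))  # True
```

If the regex passes here and on regex101 but still fails in the input stanza, the next suspect is the blacklist line's own syntax (the $xmlRegex= key and surrounding $ delimiters) rather than the pattern body.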
Hi @Gauri, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @smanojkumar, since you are passing arguments with a comma delimiter, it seems the input does not match the macro definition. The solution depends on your macro search definition: if you update your macro search definition to use OR, you can pass multiple values delimited by " OR " (with spaces), like below:

    </input>
    <input type="multiselect" token="machine" searchWhenChanged="true">
      <label>Machine type</label>
      <choice value="*">All</choice>
      <choice value="VDI">VDI</choice>
      <choice value="Industrial">Industrial</choice>
      <choice value="Standard">Standard</choice>
      <choice value="MacOS">MacOS</choice>
      <choice value="**">DMZ</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <prefix> (</prefix>
      <suffix> )</suffix>
      <delimiter> OR </delimiter>
      <change>
        <condition match="$label$ == &quot;*DMZ*&quot;">
          <set token="machine_type_dmz">"mcafee_DMZ=DMZ"</set>
        </condition>
        <condition match="$label$ != &quot;*DMZ*&quot;">
          <unset token="machine_type_dmz"></unset>
        </condition>
      </change>
    </input>
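Outside Splunk, the effect of those prefix/suffix/delimiter settings can be sketched like this: a rough Python imitation of how Simple XML joins the selected values into the token (not Splunk code; the function name is made up).

```python
def expand_token(selected, prefix=" (", suffix=" )", delimiter=" OR "):
    """Mimic how a Simple XML multiselect joins its values into one token."""
    return prefix + delimiter.join(selected) + suffix

# Two selections become a single OR-joined group the macro can consume.
print(expand_token(["VDI", "Industrial"]))  # " (VDI OR Industrial )"
```

With the comma delimiter from the original dashboard, the macro would instead receive "VDI, Industrial", which is what breaks the macro expansion.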
Below is the query I am trying to execute. In the Statistics tab I am getting the data correctly with the correct dates, but in the graph I am getting the same date for both Yesterday and Today. For example, today is the 14th and yesterday was the 13th, yet I am getting the 13th in the visualization for both days.

    index="abc" sourcetype="Prod_logs" (***earliest and latest needs to be derived as per user selection from drop down)
    | eval "yesterday_datetime_formatted" = strftime(_time,"%Y-%m-%d %H:%M:%S")
    | stats count(transactionId) AS TotalRequest by "yesterday_datetime_formatted" URI
    | eval "Uptime SLI" = *****some formula*****, "Latency SLI Yesterday" = *****some formula*****
    | appendcols [search index="abc" sourcetype="Prod_logs" earliest=@d latest=now
        | eval "Today_datetime_formatted" = strftime(_time,"%Y-%m-%d %H:%M:%S")
        | stats count(transactionId) AS TotalRequest by "Today_datetime_formatted" URI
        | eval "Uptime SLI" = *****some formula*****, "Latency SLI Today" = *****some formula*****]
    | fields "today_datetime_formatted" "Latency SLI Today" "yesterday_datetime_formatted" "Latency SLI Yesterday"
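One likely reason for the symptom: a chart can only use one field as its x-axis, so carrying two different datetime columns means one of them wins for both series. The two days need to be aligned on a shared key instead, such as time of day. A rough sketch of that alignment idea outside SPL, with made-up timestamps and plain Python rather than Splunk:

```python
from datetime import datetime

# Made-up per-request timestamps for the two days.
yesterday = ["2024-03-13 10:00:00", "2024-03-13 11:00:00"]
today = ["2024-03-14 10:00:00"]

def by_time_of_day(stamps):
    """Bucket request counts by time-of-day so both days share one x-axis key."""
    counts = {}
    for s in stamps:
        key = datetime.strptime(s, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
        counts[key] = counts.get(key, 0) + 1
    return counts

y, t = by_time_of_day(yesterday), by_time_of_day(today)
merged = {k: (y.get(k, 0), t.get(k, 0)) for k in sorted(set(y) | set(t))}
print(merged)  # {'10:00': (1, 1), '11:00': (1, 0)}
```

In SPL terms, that corresponds to charting both days against a common time-of-day field (one x-axis column, two value columns) rather than keeping separate yesterday/today datetime fields.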
Hi @Ryan.Paredez, reading this part of the doc, let me see if I understand how to check: if the checkbox in the SDK settings is checked, it means it is running on separate Linux machines; otherwise, it is running directly on the SAP application. Is my understanding correct? Thanks in advance.
Hi Ryan, I have raised a request. [AppDynamics Internal Ticket # 388553] Is there any way we can reach out to the product team to see if they can help with this requirement? Thanks, Jahnavi
After a lot of tries, I finally did it. It looks simple when you know what to do. Thank you for pointing out the substr function. The final result is below.

props.conf:

    [oce_file_rphost]
    TRANSFORMS-oce_file_tc0 = oce_file_tc0
    LINE_BREAKER = ()\d{2}:\d{2}.\d+-\d+,
    SHOULD_LINEMERGE = false

transforms.conf:

    [oce_file_tc0]
    INGEST_EVAL = _time = strptime("20" + replace(source,".*\\\\(\d{8}).log","\1") + substr(_raw,0,12),"%Y%m%d%H%M:%S.%6Q")
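The same timestamp reconstruction can be sanity-checked in Python. The sample log path and event prefix below are invented, and Python's %f stands in for Splunk's %6Q fractional-seconds token; the logic mirrors the INGEST_EVAL above.

```python
import re
from datetime import datetime

# Invented sample: an 8-digit YYMMDDHH date in the log filename, and the start
# of an event as split by the LINE_BREAKER above (MM:SS.microseconds-...).
source = r"C:\logs\rphost_1234\21041510.log"
raw = "04:23.123456-5,EXCP,..."

date_part = "20" + re.sub(r".*\\(\d{8})\.log", r"\1", source)  # "2021041510"
time_part = raw[:12]                                           # "04:23.123456"

ts = datetime.strptime(date_part + time_part, "%Y%m%d%H%M:%S.%f")
print(ts)  # 2021-04-15 10:04:23.123456
```

The filename supplies year/month/day/hour and the event itself supplies minute/second/microseconds, which is why the two strings are concatenated before parsing.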
Hi Splunkers, I would like to pass the label value to the macro based on a condition. When a single value is selected, the value is correctly passed to the macro and the search loads the results, but when multiple values are selected, the search throws an error in the macro.

    </input>
    <input type="multiselect" token="machine" searchWhenChanged="true">
      <label>Machine type</label>
      <choice value="*">All</choice>
      <choice value="VDI">VDI</choice>
      <choice value="Industrial">Industrial</choice>
      <choice value="Standard">Standard</choice>
      <choice value="MacOS">MacOS</choice>
      <choice value="**">DMZ</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <delimiter>, </delimiter>
      <change>
        <condition match="$label$ == &quot;*DMZ*&quot;">
          <set token="machine_type_dmz">"mcafee_DMZ=DMZ"</set>
        </condition>
        <condition match="$label$ != &quot;*DMZ*&quot;">
          <unset token="machine_type_dmz"></unset>
        </condition>
      </change>
    </input>

Thanks in advance!
Please give us a mock-up of what your desired output would look like
How can I get the complete datetime format for both of the queries in the graph? For example:

    index="abc" sourcetype="Prod_logs"
    | eval "yesterday_datetime_formatted" = strftime(_time,"%Y-%m-%d %H:%M:%S")
    | stats count(transactionId) AS TotalRequest by yesterday_datetime_formatted URI
    (***earliest and latest needs to be derived as per user selection from drop down)
    | appendcols [search index="abc" sourcetype="Prod_logs" earliest=xxx latest=now
        | eval "Today_datetime_formatted" = strftime(_time,"%Y-%m-%d %H:%M:%S")
        | stats count(transactionId) AS TotalRequest by Today_datetime_formatted URI]
    | fields "yesterday_datetime_formatted" "Today_datetime_formatted"