All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks a lot! A bonus question for a bonus karma point... I want to create a stacked column chart with, for each job name, the distinct values of duration. What is the exact syntax to achieve that?
Hello, I would like to create a compliance user by allowing read-only access to all knowledge objects and dashboards in our Splunk environment. I have allowed read permissions on all apps for that specific role; however, with the admin role I can view almost double the number of Alerts, Reports and Dashboards compared to the compliance role. What could be the cause here, and what could I be missing? Do I need to edit every single knowledge object and dashboard to allow permission for said role, or is there an easier method of doing this? Thanks, Regards,
@gcusello , I tried this search, but it is not giving the desired result:

| eval from_epoch = strptime(from, "%m/%d/%Y %I:%M %p")
| eval till_epoch = strptime(till, "%m/%d/%Y %I:%M %p")
| eval diff_seconds = till_epoch - from_epoch
| eval diff_years = floor(diff_seconds / (365.25*24*60*60))
| eval remaining_seconds = diff_seconds - (diff_years * 365.25*24*60*60)
| eval diff_days = floor(remaining_seconds / (24*60*60))
| eval remaining_seconds = remaining_seconds - (diff_days * 24*60*60)
| eval diff_hours = floor(remaining_seconds / (60*60))
| eval diff_minutes = floor((remaining_seconds - (diff_hours * 60*60)) / 60)
| eval duration = diff_years . " year" . if(diff_years != 1, "s", "") . " " . diff_days . " day" . if(diff_days != 1, "s", "") . " " . diff_hours . " hour" . if(diff_hours != 1, "s", "") . " " . diff_minutes . " min"

Output:
from: 11/28/2023 05:10 PM
till: 11/28/2024 05:40 PM
duration: 1 year 0 days 18 hours 30 min
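For anyone who wants to sanity-check the arithmetic outside Splunk, here is a rough Python equivalent of the eval chain above. The function name and the 365.25-day year are illustrative; the latter is the same approximation the SPL uses, which is why "1 year" absorbs the leap day:

```python
from datetime import datetime

def human_duration(start: str, end: str, fmt: str = "%m/%d/%Y %I:%M %p") -> str:
    """Mirror the SPL eval chain: break a span into years/days/hours/minutes."""
    diff = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
    year_seconds = 365.25 * 24 * 60 * 60   # same approximation as the SPL
    years, rem = divmod(diff, year_seconds)
    days, rem = divmod(rem, 24 * 60 * 60)
    hours, rem = divmod(rem, 60 * 60)
    minutes = rem // 60
    return "%d year%s %d day%s %d hour%s %d min" % (
        years, "" if years == 1 else "s",
        days, "" if days == 1 else "s",
        hours, "" if hours == 1 else "s",
        minutes)

print(human_duration("11/28/2023 05:10 PM", "11/28/2024 05:40 PM"))
# -> 1 year 0 days 18 hours 30 min
```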
@gcusello , the duration is showing like 366+00:30:00.000000. How can we change it to e.g. 0 years 181 days 23 hours 30 min?
Hi @AL3Z , let me understand: you want the difference between two dates in the format days, hours, minutes and seconds, is that correct? If this is your requirement, you could try something like this:

<your_search>
| eval duration=tostring(strptime(till,"%m/%d/%Y %I:%M %p")-strptime(from,"%m/%d/%Y %I:%M %p"),"duration")
| table from till duration

Ciao. Giuseppe
Hi, how can we find the difference between these two dates in years, days, hours and minutes?
from: 11/28/2023 03:38 PM
till: 11/28/2024 04:08 PM
Hello, I'm running a Cisco SD-WAN fabric and I was curious whether I can send data directly to Splunk Cloud. According to the Cisco Catalyst SD-WAN Splunk Integration User Guide, I should select a TCP/UDP 514 syslog data input, but I don't have this option under data inputs in Splunk Cloud. Is there a way to send the logs to Splunk Cloud, or do I need a locally installed instance of Splunk? BR, bazil
I need help making a pie chart with two slices, one for success_transaction and the other for error_transaction. When I try, it shows consolidated data by service name. I would also like to put the count inside the slices and provide a Y axis title with those field names. I was using this query; please help me solve this problem:

index="aio" Environment="POD" Appid="APP-53" ("Invokema : call() :") OR ("exception" OR level="ERROR" NOT "NOT RACT" NOT H0 NOT "N is null" NOT "[null" NOT "lid N")
| rex field=_raw "00\s(?<service_name>\w+)-pod"
| rex field=_raw "]\s(?<valid_by>.*?)\s\:\scall()"
| eval success_flag = if(valid_by="Invokema", 1,0)
| fillnull validate_by value=null
| fillnull service_name value=nservice
| eval error_flag = if(valid_by="null", 1,0)
| stats sum(success_flag) as Success_Transaction, sum(error_flag) as Error_Transaction by service_name

Your help will be appreciated.
Thank you for those details. I think I understand the problem better now. I think you have 2 main options to get missing dimensions like "azure_resource_name":

1) Enable the Microsoft Azure Cloud Integration (Data Management -> Cloud integrations -> Microsoft Azure).

2) If you want to use strictly the OTel collector, you may be able to add a processor to populate "azure_resource_name" from the "host.name" value. For example:

processors:
  resource/add_azure_resource_name:
    attributes:
      - action: upsert
        key: azure_resource_name
        from_attribute: host.name

Then be sure to add "resource/add_azure_resource_name" to the pipeline at service -> pipelines -> metrics -> processors.
Yes, the VMs are running on Azure and the collectors are running on the same VMs. As per the suggestion, I tried moving "system" to the front of the list, but there is no effect; the VMs are still listed with the long names. Kindly help.
Re-upping this; I am also facing the same issue.
thank you for teaching me.
Note that the script does not handle the event stream done tag or the event unbroken attribute. I did not observe those in testing, but I did not test extensively.
Hi @SplunkExplorer, I had a bit of fun with this today. Splunk ingests Windows event log events using a modular input, splunk-winevtlog.exe, which streams events in XML mode to stdout. The event stream looks like this:

<stream>
  <event stanza="WinEventLog://...">
    <time>...</time>
    <data>...</data>
    <source>...</source>
    <sourcetype>...</sourcetype>
    <index>...</index>
  </event>
  <event> ... </event>
  <event> ... </event>
  ...
</stream>

The schema and output behavior are documented at https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsscript/#XML-mode. The startup path for splunk-winevtlog.exe is stored in %SPLUNK_HOME%\bin\scripts\splunk-winevt.log.path. With knowledge of these two things in hand, we can begin the work of writing a wrapper for splunk-winevtlog.exe that transforms the output of the command. I've written a first iteration of a PowerShell 5.1 / .NET Framework 4.0 script to read the output of splunk-winevtlog.exe, modify the data and sourcetype elements, and write the new stream to stdout.

Before writing the script, though, the translation from WinEventLog and XmlWinEventLog to JSON must be defined. In the case of WinEventLog, it's fairly straightforward to skip the timestamp and convert the key-value pairs into JSON keys. The following input (truncated for brevity):

12/03/2023 08:28:32 PM
LogName=Security
EventCode=4688
EventType=0
ComputerName=host1
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=123
Keywords=Audit Success
TaskCategory=Process Creation
OpCode=Info
Message=A new process has been created.

Creator Subject:
	Security ID:		host1\user

becomes:

{"LogName":"Security","EventCode":"4688","EventType":"0","ComputerName":"host1","SourceName":"Microsoft Windows security auditing.","Type":"Information","RecordNumber":"123","Keywords":"Audit Success","TaskCategory":"Process Creation","OpCode":"Info","Message":"A new process has been created.\nCreator Subject:\n\tSecurity ID:\t\thost1\\user"}

In the case of XmlWinEventLog, we need to choose an object format, as compound elements, element values, and attributes don't translate directly to arrays and keys. For the example, I handle EventData as an array, attributes as keys, and element values as a key named Value. Empty elements, <Foo/>, are dropped. Elements with empty values, <Foo></Foo>, are retained. The following input (truncated for brevity):

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3b0328c30d}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2023-12-04T00:28:32.0000000Z'/><EventRecordID>1234</EventRecordID><Correlation/><Execution ProcessID='1234' ThreadID='1234'/><Channel>Security</Channel><Computer>host1</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>host1\user</Data></EventData></Event>

becomes:

{"Event":{"System":{"Provider":{"Name":"Microsoft-Windows-Security-Auditing","Guid":"{54849625-5478-4994-a5ba-3e3b0328c30d}"},"EventID":{"Value":"4688"},"Version":{"Value":"2"},"Level":{"Value":"0"},"Task":{"Value":"13312"},"Opcode":{"Value":"0"},"Keywords":{"Value":"0x8020000000000000"},"TimeCreated":{"SystemTime":"2023-12-04T00:28:32.0000000Z"},"EventRecordID":{"Value":"1234"},"Execution":{"ProcessID":"1234","ThreadID":"1234"},"Channel":{"Value":"Security"},"Computer":{"Value":"host1"}},"EventData":[{"Name":"SubjectUserSid","Value":"host1\\user"}]}}
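To make the XmlWinEventLog mapping concrete, here is a small standalone Python sketch of the same rules (this is an illustration, not part of the wrapper script): attributes become keys, element text becomes a "Value" key, EventData becomes an array, and empty elements such as <Correlation/> are dropped. As a simplification, it drops any element with no attributes, no text, and no children, which matches the sample but is slightly looser than the <Foo></Foo> rule described above:

```python
import json
import xml.etree.ElementTree as ET

def tag(elem):
    # strip the default namespace prefix, e.g. "{...events/event}EventID" -> "EventID"
    return elem.tag.split('}')[-1]

def to_obj(elem):
    """Attributes become keys; element text becomes a "Value" key; children recurse."""
    obj = dict(elem.attrib)
    if elem.text and elem.text.strip():
        obj["Value"] = elem.text.strip()
    for child in elem:
        if tag(child) == "EventData":
            obj["EventData"] = [to_obj(d) for d in child]   # array of <Data> objects
        elif child.attrib or (child.text and child.text.strip()) or len(child):
            obj[tag(child)] = to_obj(child)
        # empty elements such as <Correlation/> are dropped
    return obj

xml_event = ("<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>"
             "<System><EventID>4688</EventID><Correlation/>"
             "<Execution ProcessID='1234' ThreadID='1234'/>"
             "<Channel>Security</Channel></System>"
             "<EventData><Data Name='SubjectUserSid'>host1\\user</Data></EventData></Event>")
doc = {"Event": to_obj(ET.fromstring(xml_event))}
print(json.dumps(doc))
```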
%SPLUNK_HOME%\bin\scripts\splunk-winevt.log.path:

$SPLUNK_HOME\bin\scripts\splunk-winevtlog.cmd

%SPLUNK_HOME%\bin\scripts\splunk-winevtlog.cmd:

@"%SPLUNK_HOME%\bin\splunk-winevtlog.exe" %* | "%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -ExecutionPolicy RemoteSigned -File "%SPLUNK_HOME%\bin\scripts\ConvertTo-JsonWinEventLog.ps1"

%SPLUNK_HOME%\bin\scripts\ConvertTo-JsonWinEventLog.ps1:

Add-Type -AssemblyName "System.Web"
$xmlReaderSettings = New-Object -TypeName "System.Xml.XmlReaderSettings"
$xmlReaderSettings.ConformanceLevel = [System.Xml.ConformanceLevel]::Fragment
$xmlReaderSettings.IgnoreWhitespace = $true
$xmlStreamReader = [System.Xml.XmlReader]::Create([System.Console]::In, $xmlReaderSettings)
$xmlWriterSettings = New-Object -TypeName "System.Xml.XmlWriterSettings"
$xmlWriterSettings.ConformanceLevel = [System.Xml.ConformanceLevel]::Fragment
$xmlWriterSettings.Indent = $true
$xmlWriterSettings.IndentChars = ""
$xmlWriter = [System.Xml.XmlTextWriter]::Create([System.Console]::Out, $xmlWriterSettings)
while ($xmlStreamReader.Read()) {
    switch -Exact ($xmlStreamReader.NodeType) {
        "Element" {
            switch ($xmlStreamReader.Name) {
                "event" {
                    # expected fragment:
                    #
                    # <event stanza="WinEventLog://...">
                    #   <time>...</time>
                    #   <data>...</data>
                    #   <source>...</source>
                    #   <sourcetype>...</sourcetype>
                    #   <index>...</index>
                    # </event>
                    # write the <event> element
                    $xmlWriter.WriteStartElement($xmlStreamReader.Name)
                    # write the stanza attribute
                    if ($xmlStreamReader.HasAttributes) {
                        while ($xmlStreamReader.MoveToNextAttribute()) {
                            $xmlWriter.WriteAttributeString($xmlStreamReader.Name, $xmlStreamReader.Value)
                        }
                        $result = $xmlStreamReader.MoveToElement()
                    }
                    # read and write the <time> element
                    $result = $xmlStreamReader.Read()
                    $xmlWriter.WriteStartElement($xmlStreamReader.Name)
                    $result = $xmlStreamReader.Read()
                    $xmlWriter.WriteValue($xmlStreamReader.Value)
                    $result = $xmlStreamReader.Read()
                    $xmlWriter.WriteEndElement()
                    # read and store the <data> element
                    $result = $xmlStreamReader.Read()
                    $result = $xmlStreamReader.Read()
                    $data = $xmlStreamReader.Value
                    $result = $xmlStreamReader.Read()
                    # read and store the <source> element
                    $result = $xmlStreamReader.Read()
                    $result = $xmlStreamReader.Read()
                    $source = $xmlStreamReader.Value
                    $result = $xmlStreamReader.Read()
                    # read and store the <sourcetype> element
                    $result = $xmlStreamReader.Read()
                    $result = $xmlStreamReader.Read()
                    $sourcetype = $xmlStreamReader.Value
                    $result = $xmlStreamReader.Read()
                    # modify and write the <data> element based on the <sourcetype> value,
                    # then modify the sourcetype value
                    if ($sourcetype.StartsWith("WinEventLog:")) {
                        $json = "{"
                        $stringReader = New-Object -TypeName "System.IO.StringReader" @($data)
                        # skip timestamp
                        $result = $stringReader.ReadLine()
                        while ($line = $stringReader.ReadLine()) {
                            $keyvalue = $line.Split("=", 2)
                            $key = $keyvalue[0]
                            $value = $keyvalue[1]
                            switch ($key) {
                                "Message" {
                                    $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($key) + "`":`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($value) + [System.Web.HttpUtility]::JavaScriptStringEncode($stringReader.ReadToEnd()) + "`","
                                }
                                default {
                                    $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($key) + "`":`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($value) + "`","
                                }
                            }
                        }
                        $json += "}"
                        $data = $json.Replace(",]", "]").Replace(",}", "}")
                        $sourcetype = "JsonWinEventLog"
                    } elseif ($sourcetype.StartsWith("XmlWinEventLog:")) {
                        $json = "{`"Event`":{"
                        $stringReader = New-Object -TypeName "System.IO.StringReader" @($data)
                        $xmlEventReader = [System.Xml.XmlReader]::Create($stringReader, $xmlReaderSettings)
                        $result = $xmlEventReader.MoveToContent()
                        while ($xmlEventReader.Read()) {
                            switch -Exact ($xmlEventReader.NodeType) {
                                "Element" {
                                    if ($xmlEventReader.HasAttributes -or -not $xmlEventReader.IsEmptyElement) {
                                        switch -Exact ($xmlEventReader.Name) {
                                            "EventData" {
                                                $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Name) + "`":["
                                            }
                                            "Data" {
                                                $json += "{"
                                                while ($xmlEventReader.MoveToNextAttribute()) {
                                                    $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Name) + "`":`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Value) + "`","
                                                }
                                                $result = $xmlEventReader.MoveToElement()
                                            }
                                            default {
                                                $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Name) + "`":{"
                                                if ($xmlEventReader.HasAttributes) {
                                                    while ($xmlEventReader.MoveToNextAttribute()) {
                                                        $json += "`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Name) + "`":`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Value) + "`","
                                                    }
                                                    $result = $xmlEventReader.MoveToElement()
                                                }
                                                if ($xmlEventReader.IsEmptyElement) {
                                                    $json += "},"
                                                }
                                            }
                                        }
                                    }
                                }
                                "Text" {
                                    $json += "`"Value`":`"" + [System.Web.HttpUtility]::JavaScriptStringEncode($xmlEventReader.Value) + "`""
                                }
                                "EndElement" {
                                    if ($xmlEventReader.Name -eq "EventData") {
                                        $json += "],"
                                    } else {
                                        $json += "},"
                                    }
                                }
                            }
                        }
                        $json += "}"
                        $data = $json.Replace(",]", "]").Replace(",}", "}")
                        $sourcetype = "JsonXmlWinEventLog"
                    }
                    # write the <data> element
                    $xmlWriter.WriteStartElement("data")
                    $xmlWriter.WriteValue($data)
                    $xmlWriter.WriteEndElement()
                    # write the <source> element
                    $xmlWriter.WriteStartElement("source")
                    $xmlWriter.WriteValue($source)
                    $xmlWriter.WriteEndElement()
                    # write the <sourcetype> element
                    $xmlWriter.WriteStartElement("sourcetype")
                    $xmlWriter.WriteValue($sourcetype)
                    $xmlWriter.WriteEndElement()
                    # continue
                }
                default {
                    $xmlWriter.WriteStartElement($xmlStreamReader.Name)
                    if ($xmlStreamReader.HasAttributes) {
                        while ($xmlStreamReader.MoveToNextAttribute()) {
                            $xmlWriter.WriteAttributeString($xmlStreamReader.Name, $xmlStreamReader.Value)
                        }
                        $result = $xmlStreamReader.MoveToElement()
                    }
                }
            }
        }
        "Text" {
            $xmlWriter.WriteValue($xmlStreamReader.Value)
        }
        "EndElement" {
            $xmlWriter.WriteEndElement()
        }
    }
    $xmlWriter.Flush()
}

Sample inputs.conf with renderXml = false:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
renderXml = false

Sample inputs.conf with renderXml = true:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
renderXml = true
suppress_text = true
suppress_sourcename = true
suppress_keywords = true
suppress_type = true
suppress_task = true
suppress_opcode = true

Tying everything together generates events with either sourcetype=JsonWinEventLog or sourcetype=JsonXmlWinEventLog depending on the original sourcetype. As @PickleRick noted, your next task would be re-creating the knowledge objects provided by the Splunk Add-on for Windows. Do try this at home! But don't try it in production without sufficient testing. If you're looking for a simpler solution, there's at least one very popular third-party product that handles transformations like this in manageable pipelines.
Hi @Splunkerninja, I answered a similar question earlier this year: https://community.splunk.com/t5/Other-Usage/How-to-change-treemap-default-colors-for-categorical-colorMode/m-p/639216/highlight/true#M119 
Hi, I don't have a Solr instance to test, but here are some thoughts that might help you troubleshoot. https://docs.splunk.com/observability/en/gdi/monitors-hosts/solr.html https://github.com/signalfx/collectd-solr I'm not sure that this receiver supports basic auth with a username and password. It looks like it might support certificate-based authentication, so that is something you might want to try. In the sample you shared, you probably don't want any of the configuration in the "exporters" section; anything you try with configuration of the solr receiver belongs in the "receivers" section. Also, don't forget to put the receiver in your metrics pipeline later in the config file. The sample also contains some unconventional indentation. Remember that the YAML configs are indentation sensitive, so you'll need to have every line indented with spaces just right.
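As a rough illustration of the overall shape (the receiver name, monitor type, and connection settings below are assumptions for the sketch, not taken from your config; verify them against the docs linked above):

```yaml
receivers:
  smartagent/solr:        # arbitrary name; must be referenced again in the pipeline
    type: collectd/solr   # monitor type per the Splunk Observability docs
    host: localhost
    port: 8983

service:
  pipelines:
    metrics:
      receivers: [smartagent/solr]
      # processors/exporters omitted; keep whatever your config already defines
```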
Hi, you may want to try creating a custom detector. Can you try this and see if it's what you want?
1. Go to the Alerts & Detectors section.
2. Click "New Detector" and select "Custom Detector".
3. Give the new detector a name and click "create alert rule".
4. For your alert signal, choose "synthetics.run.uptime.percent".
5. Add a filter on this signal for "test_id" and specify the ID of your test (tip: the ID of your test is visible in the URL when you are viewing it).
6. Click on "add analytics" and select "mean" and "mean aggregation". Click anywhere outside of the box to clear out the "group by" box.
7. In the time window, choose something appropriate such as "-1h" for "past 1 hour".
8. Proceed to "alert condition" and select "static threshold".
9. Proceed to "alert settings" and choose "below" and "98". The UI will do some historical analysis and show you how many times the alert would have fired over your selected time period.
10. From here you can proceed by customizing your alert message and alert recipients, and activating the alert if everything looks good.
Hi, Is this VM in a public cloud? The reason I ask is because the default agent_config.yaml file has a resource detection configuration that allows names to be set by the cloud provider so it's possible the host name is being set that way. If you think this might be the case, you could experiment by moving "system" to the front of that list and seeing if the host.name changes after that. I guess I should also confirm that you're running the OTel collector on this VM--correct? (e.g., the metrics are not coming from a native cloud integration and are being collected by OTel collector running on that VM?)  
They made a real mess after 7.0 and some 8.x releases, even with optimize_ui_prefs_performance set to false. I'm now on version 8.2.12:
1) optimize_ui_prefs_performance set to true destroys all old users' customization on the search tab, and so does setting it to false
2) new ui-prefs.conf files are not created anymore; only old ui-prefs are managed
3) also, etc/users/launcher/local/ui-prefs.conf to remove the "Explore Splunk Enterprise" banner has gone away!
4) users can't change the Alerts/Reports/Dashboards object view modality (general/owner/app), since it's defaulted and reverted back to "All" the next time you load the page!!!
5) it seems ui-prefs is managed correctly only in "app/search/search|alerts|reports|dashboards" (the default Search app)
This is really a great mess!!! We have had many users complain about this poor UI management!!!
@bowesmana I understand the technique now! After some tweaking, I now get the expected results. Some important points for anyone who is reading this:
- group by the parent table so that all the child records are in multi-value columns
- find the multi-value index that matches
- remove the non-matching multi-value records
- list() does not dedup results; this is needed when filtering the mv results by index

| convert mktime(_time) as epoch
| inputlookup append=t Request_admin_access.csv
``` need to use the same column name for stats to group on ```
| eval userForMatching=coalesce(normalisedUserName, normalisedReporterName)
``` group by events, so that all the possible child requests are in mv columns ```
| stats list(epoch) as epoch list(key) as key values(reporterName) as reporterName values(reporterEmail) as reporterEmail list(summary) as summary list(changeStartDate) as changeStartDate list(changeEndDate) as changeEndDate values(user) as user values(os) as os values(clientName) as clientName values(clientAddress) as clientAddress values(signature) as signature values(logonType) as logonType by host userForMatching
``` expand the events ```
| mvexpand epoch
``` find the mv index where event._time is between request.start and request.end dates ```
| eval isAfterStart=mvmap(changeStartDate, if(epoch>=changeStartDate, 1, 0))
| eval isBeforeEnd=mvmap(changeEndDate, if(epoch<changeEndDate, 1, 0))
| eval idx=mvfind(mvzip(isAfterStart, isBeforeEnd), "1,1")
| rename epoch as _time
``` filter to just the matching request ```
| eval key=mvindex(key, idx)
| eval reporterName=if(isnull(idx),"",reporterName)
| eval reporterEmail=if(isnull(idx),"",reporterEmail)
| eval summary=mvindex(summary, idx)
| eval changeStartDate=mvindex(changeStartDate, idx)
| eval changeEndDate=mvindex(changeEndDate, idx)
``` human readable times ```
| convert ctime(changeStartDate) timeformat="%F %T"
| convert ctime(changeEndDate) timeformat="%F %T"
| table _time os host user clientName clientAddress signature logonType key reporterName reporterEmail summary changeStartDate changeEndDate
| sort -_time

Many thanks
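Outside of SPL, the core of that matching step (find the one child request whose [start, end) window contains the event time, and drop the rest) reduces to something like this Python sketch; the field names are borrowed from the search above, and the epoch values are made up for illustration:

```python
def match_request(event_epoch, requests):
    """Return the first request with changeStartDate <= event < changeEndDate,
    mirroring the mvzip/mvfind trick: "1,1" means both conditions hold."""
    for req in requests:
        if req["changeStartDate"] <= event_epoch < req["changeEndDate"]:
            return req
    return None  # no matching request: the SPL leaves the request fields empty

requests = [
    {"key": "CHG-1", "changeStartDate": 100, "changeEndDate": 200},
    {"key": "CHG-2", "changeStartDate": 300, "changeEndDate": 400},
]
print(match_request(350, requests)["key"])   # -> CHG-2
print(match_request(250, requests))          # -> None
```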