All Posts

Hi @Jeffrey.Leedy, thanks for sharing this info. It seems like it might be a bug, so I would recommend contacting Support and letting them know (see "How do I submit a Support ticket? An FAQ"). If you hear back from Support, please report back on this thread.
Hi @Everton.Arakaki, Did you ever reach out to Support and get a reply?
Index-time extractions don't have an equivalent to the max_match option of the rex command. Consider extracting all users together and then extracting them at search time.

[get-users]
REGEX = (\d:)(?<user>.+)
FORMAT = users::$1
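A search-time follow-up along these lines might work, assuming the combined users field holds semicolon-separated digit:name entries (the sample value, delimiter, and pattern here are illustrative, not from the original question):

| makeresults
| eval users="1:alice;2:bob;3:carol"
``` hypothetical combined value produced by the index-time extraction ```
| rex field=users max_match=0 "\d:(?<user>[^;]+)"
``` user is now a multi-value field with one entry per match ```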
We just upgraded today to v9.1.1 and are experiencing this "Loading" issue as well. It's too bad I didn't see this post earlier; we would have delayed our upgrade in favour of another version. Has anyone received guidance on a fix, or is everyone reverting to a previous version?
Hi @Swathi.Srinivasan, I found this documentation. Please check it out and see if it helps. https://docs.appdynamics.com/appd/23.x/latest/en/database-visibility/monitor-databases-and-database-servers/monitor-database-performance/database-dashboard
Hi @Adiaobong.Odungide, I've reached out to the Docs team looking for some clarification. I will report back when I have more info.
Hi @Nick.Silin, Since you created a Support ticket, could you please share the outcome as a reply here?
How are you escaping the semicolon in the curl command? I suspect you need to use a shell escape to keep the shell from thinking the ; separates two commands (and makes Splunk think the search is missing a quote).

curl ... -d search="search ... | makemv delim=\"~;\" hashes | ..." -d output_mode=csv
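If the inline escape doesn't pan out, another option may be to let curl URL-encode the payload so the raw semicolon never hits the wire. A sketch against a standard search REST endpoint (the host, credentials, and index are placeholders, not from the original post):

# single quotes keep the shell away from ; and "; --data-urlencode percent-encodes the value
curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export \
  --data-urlencode 'search=search index=main | makemv delim=";" hashes | table hashes' \
  -d output_mode=csv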
Hi Rich, Thank you for your reply. I have run the command and added some extras, as I was also after the time; however, I think it chops the seconds off.

| makeresults
| eval StartDateTime="59025.5249306"
``` Extract the day number (prior to .) ```
| rex field=StartDateTime "(?<JD>\d+)"
``` Shift the day number to base epoch day and convert to seconds ```
| eval time=(JD-40587) * 86400
``` Convert to text ```
| eval humanTime=strftime(time, "%c")
| eval humanTime1=strftime(time, "%Y-%m-%d %H:%M:%S.%1N")
| table _time StartDateTime JD time humanTime humanTime1

This gives 2020-06-25 00:00:00.0. Do we have to extract the seconds and add them back to the epoch?
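Since the digits after the decimal point in an MJD value are just the fraction of the day elapsed, one way to keep the seconds may be to convert the whole value rather than only the integer day. A minimal sketch building on the same run-anywhere example (note that strftime renders in the search head's local timezone):

| makeresults
| eval StartDateTime="59025.5249306"
``` Convert the whole MJD value, fraction included: (MJD - 40587) days * 86400 seconds/day ```
| eval time=(tonumber(StartDateTime)-40587) * 86400
| eval humanTime=strftime(time, "%Y-%m-%d %H:%M:%S")

For this sample value that works out to roughly 2020-06-25 12:35:54, since 0.5249306 of a day is about 45354 seconds.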
Once multi-value fields are expanded, any relationship among them is lost. They need to be combined into a new field before expansion. Note that field names containing special characters must be referenced with single quotes in eval; double quotes would produce string literals rather than field values.

... | eval new_field = mvzip('participants{}.object_value', 'participants{}.role')
| mvexpand new_field
| eval new_field = split(new_field, ",")
| eval object_value = mvindex(new_field, 0), role = mvindex(new_field, 1)

The mvzip function combines two multi-value fields, separating them with a comma. The split function later on breaks the field on the comma. If you have more than two fields to combine, use nested mvzip functions.

| eval new_field = mvzip(field1, mvzip(field2, mvzip(field3, field4)))
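A minimal run-anywhere sketch of the full round-trip for three fields (the field names and sample values are made up); each level of nesting adds one comma, so mvindex pulls the pieces back out by position:

| makeresults
| eval f1=split("a1;a2", ";"), f2=split("b1;b2", ";"), f3=split("c1;c2", ";")
``` zip the three fields into one: "a1,b1,c1" and "a2,b2,c2" ```
| eval new_field = mvzip(f1, mvzip(f2, f3))
| mvexpand new_field
``` break each expanded row apart and pick values out by position ```
| eval parts = split(new_field, ",")
| eval val1 = mvindex(parts, 0), val2 = mvindex(parts, 1), val3 = mvindex(parts, 2)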
The following works fine in the Search app:

... | makemv delim=";" hashes | ...

The equivalent curl call

curl ... -d search="search ... | makemv delim=\";\" hashes | ..." -d output_mode=csv

fails with an "Unbalanced quotes" error. Delimiters other than ; work fine. I tried escaping the semicolon, using Unicode values, and replacing the string with a variable, all to no avail. Any suggestions?
This run-anywhere example shows how to convert MJD into a text date.

| makeresults
| eval StartDateTime="59025.5249306"
``` Extract the day number (prior to .) ```
| rex field=StartDateTime "(?<JD>\d+)"
``` Shift the day number to base epoch day and convert to seconds ```
| eval time=(JD-40587) * 86400
``` Convert to text ```
| eval humanTime=strftime(time, "%c")
I need to break out log data from two separate multi-value fields into single-value fields. Here is what the data looks like: Each line of data from "participants{}.object_value" corresponds to the line in "participants{}.role", and I would like named victims and offender fields. I don't understand how to use the mv commands to expand the data from two different fields and then combine them into new fields.
Thank you for your reply, bowesmana. You are correct that I do not have events containing the startswith and endswith strings with the expanded token string. I am attaching a screenshot of some data. Bear in mind that the Zone can range from 111 through 347. For future expansion reasons I would really like to use

startswith=*$Zoneid_tok$",1,0,192" endswith=*$Zoneid_tok$",0,0,192"

if at all possible (where the 1 marks the beginning of the event and the 0 marks the end). I truly appreciate any help you can provide.
Thanks again! The output didn't return any config for this sourcetype on the EC2 node with the Universal Forwarder, so, as I expected, the forwarding machine is not modifying the log lines. Unfortunately I don't have access to the Splunk servers, so I can't run the command on those machines, but I checked the Admin UI for both environments. The sourcetype is not defined on either stack, which makes me believe they should not have any transformation configured on the server side, correct?
Excellent Job!!!
Try this. The timechart command should fill in empty time slots automatically.

| tstats prestats=true count as Total where index="abc" by _time, Type span=1d
| timechart span=1d cont=true count as Total by Type
The search head alone can be replaced/rebuilt.
Hi All, I have read similar posts but none that will get me to an answer. My log entry is this:

2023-09-19 16:17:01,306 <OnAirSchedule Service="9008" Status="ON" StartDateTime="59025.5249306"/>

The StartDateTime is in MJD and I would like to get it into a human-readable format. Below is my search: some regex to start with, then the conversion.

| rex "<OnAirSchedule\sService=\"(?<SERVICE>[0-9]+)\"\sStatus\=\"(?<STATUS>.+)\"\sStartDateTime\=\"(?<START_DATE>.+)\"\/\>"
| eval jdate=START_DATE,epoch_date=strptime(jdate,"%y%j"),date=strftime(epoch_date,"%Y-%m-%d %H:%M:%S.%1N")
| table _time SI_SERVICE_KEY STATUS START_DATE epoch_date date

This was a solution in another question; however, I get the date time 2059-01-25 00:00:00.0. I have tried variants of %y%j, such as %y.%j and %Y.%j, but these just treat the value as a Julian date rather than using the digits after the decimal point. This page seems to point to something I am after, but it doesn't deal with the full MJD: https://community.splunk.com/t5/Getting-Data-In/Splunk-recognizing-Julian-Date-and-Elapsed-Seconds/m-p/72709 Any advice greatly welcomed.
Yes, Splunk merges the settings for inputs.conf from all enabled apps as well as system/default and system/local to arrive at the complete list of inputs. See the Admin Manual at https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Wheretofindtheconfigurationfiles for a description of configuration file precedence. Note that etc/apps/test/inputs.conf will be ignored by Splunk because it is not in a local or default directory.
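If you want to verify the merged result (and see which file each setting comes from), btool can print the effective configuration. A quick sketch, run from $SPLUNK_HOME/bin on the instance in question:

./splunk btool inputs list --debug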