All Posts


Thank you @bowesmana, it worked!
Is there anything wrong with the method you already use?  Or is there a specific effect it is not giving you?  If you think it through, trellis has only one single variable for breakdown and display.  All you can do is change this value, and your search already does that.  If it ain't broke, don't fix it.
This question better belongs in Splunk Search.  Anyway, let me generalize your ask: you want the last part of the file path (which is usually the file name).  You can use regex, but a more semantic and potentially cheaper solution is to use split and mvindex.  (Note the doubled backslash in split; a single backslash would escape the closing quote.)

EventCode=1004
| rex field=_raw "Files: (?<Media_Source>.+?\.txt)"
| eval Filename = mvindex(split(Media_Source, "\\"), -1)
| table Media_Source Filename

If you really want to use regex, you can do

EventCode=1004
| rex field=_raw "Files: .+?\\(?<Media_Source>[^\\]+\.txt)"
| table Media_Source

Hope this helps.
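For readers outside SPL, the split/mvindex idea maps to ordinary string handling.  A minimal Python sketch of the same logic (the path below is a made-up example, not from the original data):

```python
# Take the last backslash-separated component of a Windows-style path,
# mirroring SPL's mvindex(split(Media_Source, "\\"), -1).
def filename_from_path(path: str) -> str:
    return path.split("\\")[-1]

print(filename_from_path("C:\\logs\\export\\report.txt"))  # report.txt
```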
| rest /services/authentication/users splunk_server=local
| fields roles title type email
| rename title as username
| search type=SPLUNK
For years I have kept a standalone Splunk Enterprise running on MacBooks.  Typically I keep macOS in sleep or running mode overnight.  Splunk will run until I reboot (or force a restart).  Never had a problem.

But in the past two weeks, I had two nights during which splunkd on one MacBook entered a "frozen" state: it would respond to some HTTP queries (e.g., listing dashboards), but all search jobs stopped responding.  I had to either run the Splunk launcher to stop it and relaunch, or reboot.  Meanwhile, another MacBook continues to run Splunk fine (in sleep mode).

Has anyone experienced the same?  What could be possible causes?  Neither instance has any recurring jobs or ingestion.  Current version is 9.1.2.  The problematic one runs macOS 12.7.3 on M1 (last updated some weeks ago); the other one runs the same macOS on Intel.
Sorry, been busy with other work.  Maybe I am doing this wrong.  The only way I could figure out how to give a bit more information in a graph was to join the code and phrase and then use that in the Split By in the trellis:

| rex field=Error_Text ".*:\s(?P<Code>\d{3})"
| lookup error_codes Code OUTPUT Phrase
| eval CodePhrase = Code + " -- " + Phrase

When I use the drilldown, it uses the joined CodePhrase field.  So I am wondering if there is another way to add the text to the graphs.
Let me try to answer two separate questions.  I think the question about "modified time" is in regard to the file system record.  Is this correct?  Yes, file system modified time is updated.  Splunk 9 added a …

Update: If you install Chris Younger's Config Explorer, you will find sourcetype config_explorer in _internal that includes the information you want.  For example, you can do

index=_internal sourcetype="config_explorer" item="./etc/*/lookups/*"
| stats max(_time) as _time by item

I don't think such information is retained before 9 without an app.
Hi, has anyone faced this issue where you received a 401 Unauthorized error response from ServiceNow?  The scenario is as below.  We are using an AD service account, userA, to interact with ServiceNow for incident creation.  On the Splunk side, we are using Basic Auth.  In AD, the user account is set to never expire.

So far we have checked the service account status; no changes were made, but the issue was sudden.  We ran the query

index=_internal sourcetype="ta_snow_ticket" host IN (<search head>)

and that is where we saw the return code 401 (Unauthorized).  What else can be checked?  As of now, we are planning to reset the service account password and try again.  But if that works, the remaining issue is finding what caused the password to change when it had been set to never expire.
Hi All,

I've been using the Add-on Builder to create some modular inputs and associated add-on configuration pages, including account names.  I've also shoehorned in some custom search commands which use the automatically generated configuration settings from the web GUI.  One thing I noticed: when you deploy the app to a search head cluster, add-on configuration changes do not migrate across the cluster.  I thought it used REST endpoints to make these changes, so they should replicate across the cluster?  Might be worth putting something in the documentation so users are aware that the apps Add-on Builder makes will not be fully functional in a search head cluster environment.

The only solution we've found is to manually log into each search head in the cluster and make the changes on each one individually.
Old post, but I just noticed the Add-on Builder doesn't support config or account changes in a search head cluster environment either.  Did you ever find a solution?  I'm sure I've hand-crafted add-ons that were capable of having config changes replicated across the cluster.
Hi @Hardy_0001, I am still facing this issue.  Could you please share your solution?
I have a suspicion that you misspelled either account_id or aws_account_id in the macro, because as you presented it, the resultant subsearch is NOT ().  Are you sure you copied the above search verbatim into the index search and got a correct result that is NOT the same as using the macro?  Further, which field name exists in the actual data: aws_account_id or account_id?  For example, if account_id exists AND you intend to match account_id in index data with "Account ID" in the lookup, your macro should be something like

search [inputlookup Account_Owners.csv
| rename "Account ID" as account_id
| search Environment IN (PROD, UAT)
| table account_id]

Hope this helps.
@tatdat171, were you able to resolve this issue?  Checking because we are experiencing the same issue.
If all you need to get the alert going is to remove the decrypted field, it could be an undocumented security feature in alert actions.  But it could also be a DECRYPT2 feature.  Ask the app developers or consult its documentation.
You cannot.  Splunk does not interact with external websites directly.  You can create a custom command to do so.  See Create a custom search command for Splunk Cloud Platform or Splunk Enterprise.  Hope this helps.
Standard SPL doesn't have a curl command.  You should ask the developer of the app that gives you curl, or consult its manual to find out the correct syntax and any limitations it may have.  (Given that the "Apps and Add-ons" forum is gone from this board, Splunk Dev might be a successor.)  Also, you talk about "dynamic url" as if it is a Splunk feature.  It is not.  Maybe you can illustrate with your SPL snippet so volunteers can understand precisely what you are referring to?
You are correct that join is slow and easily hits limits.  But how is stats with coalesce not actually connecting the events/data together?  What exactly do you get?  Given that your mock code uses mock field names, are you sure you typed the field names correctly in coalesce and group by?  What is the output from the following test?

index=index_1 (sourcetype=source_1 field_D="Device" field_E=*Down* OR field_E=*Up*) OR (source="source_2" earliest=-30d@d latest=@m)
| eval field_AB=coalesce(field_A, field_B)
| where isnotnull(field_AB)
| table _time source field_AB

If field_A exists in every event from source_1, and likewise field_B in source_2, the test should list every event from both sources.  Do you get that?

If all spellings are correct, rename your actual field names to field_A, field_B, etc., then post sample data from the two sources (anonymized as needed) so volunteers have a basis to help in concrete ways.  This is a long way of saying that stats with coalesce is the correct approach.
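To see why stats-by-coalesce connects events without a join, here is a small Python sketch using the mock field names from the search above (the event values are invented for illustration): events whose coalesced key matches land in the same group.

```python
def coalesce(*values):
    """Return the first non-None value, like SPL's coalesce()."""
    for v in values:
        if v is not None:
            return v
    return None

# Mock events: field_A comes from source_1, field_B from source_2.
events = [
    {"source": "source_1", "field_A": "dev-1", "field_E": "Down"},
    {"source": "source_2", "field_B": "dev-1", "owner": "team-x"},
    {"source": "source_2", "field_B": "dev-2", "owner": "team-y"},
]

# Group by the coalesced key, the way "stats ... by field_AB" would.
groups = {}
for e in events:
    key = coalesce(e.get("field_A"), e.get("field_B"))
    if key is not None:
        groups.setdefault(key, []).append(e)

print(sorted(groups))  # ['dev-1', 'dev-2'] -- dev-1 holds events from both sources
```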
Again, thank you for including a data emulation in the problem presentation.  I tried to be more semantic, but the syntax got more and more tangled.  So this time, I will just use string manipulation.

| timechart span=1w@w first(MathGrade) as MathGrade, first(EnglishGrade) as EnglishGrade, first(ScienceGrade) as ScienceGrade by Student useother=f limit=0
| eval _time = strftime(_time, "%m/%d/%Y")
| fields - _span _spandays
| transpose 0 header_field=_time column_name=Grade
| eval Grade = split(Grade, ": ")
| eval Student = mvindex(Grade, 1), Grade = mvindex(Grade, 0)
| table Student Grade *
| sort Student

Here, I included the snap-to @w which you asked about in the other question.  Your emulated data gives

Student   Grade         02/04/2024  02/11/2024  02/18/2024  02/25/2024
Student1  EnglishGrade  10          6           10          7
Student1  MathGrade     10          6           10          7
Student1  ScienceGrade  10          6           10          7
Student2  EnglishGrade  9           8           9           6
Student2  MathGrade     9           8           9           6
Student2  ScienceGrade  9           8           9           6

Hope this helps.
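The key string trick is that transpose produces row labels like "MathGrade: Student1", and splitting on ": " recovers the two columns, which is all the split/mvindex evals do.  A tiny Python sketch (the label is a made-up example in that shape):

```python
# Split a transpose-style row label back into its two parts,
# mirroring SPL's split(Grade, ": ") plus mvindex(..., 0) and mvindex(..., 1).
row_label = "MathGrade: Student1"
grade_field, student = row_label.split(": ")
print(student, grade_field)  # Student1 MathGrade
```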
First, thank you for including a data emulation in the question.  There are two aspects that Splunk chooses to address separately.  First is the calendar time anchor.  (I think Splunk calls this "snap-to"; see Date and time format variables.)  If you use @w with the span attribute, timechart will snap to the beginning of the week, which is deemed to be the start of Sunday on the CE calendar, in whichever timezone the search head uses.  For example, using your emulation with

| timechart span=1w@w first(MathGrade) by Student useother=f limit=0

gives

_time       Student1  Student2
2024-02-04  10        9
2024-02-11  6         8
2024-02-18  10        9
2024-02-25  7         6

Now, your ask is to begin a week on an arbitrary day.  Given that timechart doesn't support this, the hack is to shift time back and forth.  For example, 02/09/2024 is a Friday, or day 5 in Splunk's dow count.

| eval _time = relative_time(_time, "-5d@d")
| timechart span=1w@w first(MathGrade) by Student useother=f limit=0
| eval _time = relative_time(_time, "+5d@d")

Using the same emulation, the above gives

_time       Student1  Student2
2024-02-02  10        9
2024-02-09  8         6
2024-02-16  9         8
2024-02-23  5         9

Hope this helps.
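The shift-snap-shift hack can be sanity-checked outside Splunk.  A Python sketch, assuming the Sunday week start matches Splunk's @w (Python counts Monday=0, so "days since Sunday" is (weekday+1) % 7):

```python
from datetime import datetime, timedelta

SHIFT_DAYS = 5  # Friday start: 02/09/2024 is day 5 in Splunk's dow count

def friday_week_start(ts: datetime) -> datetime:
    """Mirror relative_time(-5d@d) -> span=1w@w -> relative_time(+5d@d):
    shift back 5 days, snap to the Sunday week start, shift forward."""
    shifted = ts - timedelta(days=SHIFT_DAYS)
    # Snap to the most recent Sunday at midnight.
    snapped = (shifted - timedelta(days=(shifted.weekday() + 1) % 7)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return snapped + timedelta(days=SHIFT_DAYS)

print(friday_week_start(datetime(2024, 2, 12, 8, 30)).date())  # 2024-02-09
```

A Thursday such as 2024-02-08 falls in the previous Friday-anchored week (2024-02-02), matching the second table above.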
How about something like this?

| makeresults
| eval start = strptime("02-01-2024", "%m-%d-%Y")
| eval today = now()
| eval time_difference = floor((today - start) / (60*60*24))
| eval mod_val = time_difference % 28
| eval days_to_patch = 28 - mod_val
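The same 28-day cycle arithmetic can be checked in a few lines of Python, using the 02-01-2024 anchor from the search above (the sample "today" is an arbitrary date for illustration):

```python
from datetime import date

def days_to_patch(today: date, start: date = date(2024, 2, 1), cycle: int = 28) -> int:
    """Days remaining in the current cycle anchored at start,
    mirroring the eval chain: cycle - (elapsed_days % cycle)."""
    elapsed = (today - start).days
    return cycle - (elapsed % cycle)

print(days_to_patch(date(2024, 2, 15)))  # 14
```

Note that on a cycle boundary (elapsed % 28 == 0) this returns 28, i.e., a full cycle remaining, which is what the SPL version computes as well.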