All Posts

Check out the walklex command. It will tell you which fields are indexed, as opposed to all the fields in an index. See https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Walklex

| walklex type=field index=foo
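If all you need is the distinct list of indexed field names, you can aggregate the walklex output. A minimal sketch - this assumes your version emits a field column for type=field results, so check the raw output first:

| walklex type=field index=foo
| stats count by field
| sort - count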
There's not really a good way to get fields for an index, but you can do it like this, using the output from fieldsummary as a subsearch:

index=*
    [ search index=*
      | fieldsummary maxvals=1
      | fields field
      | eval {field}=if(match(field, "_time"), null(), "*")
      | fields - field ]
| foreach * [ eval fields=mvappend(fields, if(isnotnull('<<FIELD>>'), "<<FIELD>>", null())) ]
| stats values(fields) as fields by index

It's not particularly efficient, but it will give you an idea - the _time check is because search won't like earliest_time="*" in the subsearch output.
THIS WORKS!!!!!!!!!! Thanks so much!!!!
Actually, I made a mistake when I emulated the data. (I had a spurious "match" field in the emulation which your index subsearch will not give.) Here is the correct solution:

index="web_index"
    [| inputlookup URLs.csv | fields kurl | rename kurl as url]
| append [ inputlookup URLs.csv | fields kurl ]
| eval match = if(isnull(url), 0, 1)
| eval url = coalesce(url, kurl)
| stats sum(match) as count by url

Here is the corrected emulation:

| makeresults
| eval url = mvappend("google.com", "foo.com", "bar.com", "google.com")
| eval index = "web_index"
| mvexpand url
| search * [ inputlookup URLs.csv | fields kurl | rename kurl as url ]
``` the above emulates index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url] ```

This gives

_time                index      url
2023-09-20 16:20:05  web_index  google.com
2023-09-20 16:20:05  web_index  google.com

(If your results do not meet the requirement, it would be useful to compare real data with the emulation.) The combined simulation is then

| makeresults
| eval url = mvappend("google.com", "foo.com", "bar.com", "google.com")
| mvexpand url
| lookup URLs.csv kurl as url output kurl as match
| where isnotnull(match)
| fields - match
``` the above emulates index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url] ```
| append [ inputlookup URLs.csv | fields kurl ]
| eval match = if(isnull(url), 0, 1)
| eval url = coalesce(url, kurl)
| stats sum(match) as count by url

I get

url          count
google.com   2
splunk.com   0
youtube.com  0

However, I still don't understand why the first option (stats before append) would not give you the correct output. Here's my full emulation:

| makeresults
| eval url = mvappend("google.com", "foo.com", "bar.com", "google.com")
| mvexpand url
| lookup URLs.csv kurl as url output kurl as match
| where isnotnull(match)
| fields - match
``` the above emulates index="web_index" [| inputlookup URLs.csv | fields kurl | rename kurl as url] ```
| stats count by url
| append [ inputlookup URLs.csv | fields kurl | rename kurl AS url ]
| stats sum(count) as count by url
| fillnull count ``` or you can omit this and leave nonexistent to show null ```

Again, it would help if you could run the data part of the emulation and let me know how it differs from real data.
The linked article by @dmacintosh_splu shows you how to create the comparable relative time for the same period in the previous year using a dummy search. To make the 1-year calculation, I would do

<search>
  <query>
    | makeresults
    | addinfo
    | eval prev_year_earliest=relative_time(info_min_time, "-1y")
    | eval prev_year_latest=relative_time(info_max_time, "-1y")
    | fields prev_*
  </query>
  <done>
    <set token="prev_year_earliest">$result.prev_year_earliest$</set>
    <set token="prev_year_latest">$result.prev_year_latest$</set>
  </done>
</search>

What is it that you can't do specifically? Do you want a single panel to show both years on a timechart? When you say trend, do you mean a straight line indicating direction, or comparative data points for the previous year? If you want a single panel showing both years, then you still need the above search, and your main search will be something like this, including both token sets and then using timewrap to wrap the previous year onto the current year:

search (earliest=$time.earliest$ latest=$time.latest$) OR (earliest=$prev_year_earliest$ latest=$prev_year_latest$)
...
| timechart ...
| timewrap 1y
Yeah, it still doesn't mark the URLs in my list that don't exist in the index as 0. I'll just accept your solution though - thanks for your help.
Thanks @dmacintosh_splu for the response, but it doesn't really help me. When I select the duration in the time picker, say from Jan 1, 2023 to May 1, 2023, my dashboard has to show the trend for the number of tickets in the first panel, and in the second panel the trend for the number of tickets over the same duration in the previous year (Jan 1, 2022 to May 1, 2022). I am not sure how to frame the search query for extracting the ticket trend for the previous year.
What ticketing system are you using? Are you trying to avoid modifying the saved search for the alert?
This Answer may be what you need.  https://community.splunk.com/t5/Dashboards-Visualizations/Panel-not-updating-when-changing-the-time-range-picker/m-p/332407/highlight/true#M21545
I had already tried that as well, but with no luck. It has to be something else that I'm missing. Thanks for replying though. If I figure it out, I'll post an update here.
We currently have an alert set up that generates a ticket in our ticketing platform. We are moving to a new ticketing platform and have used collect to copy the event into a new index for that platform to pull data from. Is there a way to rename the fields of the collected event without changing the field names for the current alert? We have to have different field names for the new ticketing system to map correctly. My only idea right now is to either duplicate the alert and run both in parallel, or, when the ticketing system queries Splunk for new events, have that query contain a search macro that does the renaming before the events are pulled in.
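To illustrate the macro idea, something along these lines is what I have in mind - a rough sketch where the macro name and field names are made up, not our real ones:

# macros.conf - stanza and field names below are illustrative
[rename_for_new_ticketing]
definition = rename alert_severity AS priority, alert_host AS affected_host

The new ticketing system's query would then be something like:

index=new_ticketing | `rename_for_new_ticketing`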
Hi @Balaji.M, when did AppD set this up? Are you able to reach out to the person/team who helped set it up to ask them for additional help?
I am trying to create a dashboard that holds multiple tables of WebSphere App Server configuration data. The data I have looks like this:

{"ObjectType ":"AppServer","Object":"HJn6server1","Order":"147","Env":"UAT","SectionName":"Transport chain: WCInboundDefaultSecure:Channel HTTP", "Attributes":{"discriminationWeight": "10","enableLogging": "FALSE","keepAlive": "TRUE","maxFieldSize": "32768","maxHeaders": "500","maxRequestMessageBodySize": "-1","maximumPersistentRequests": "100","name": "HTTP_4","persistentTimeout": "30","readTimeout": "60","useChannelAccessLoggingSettings": "FALSE","useChannelErrorLoggingSettings": "FALSE","useChannelFRCALoggingSettings": "FALSE","writeTimeout": "60"}}

Every event is a configuration section within an appserver, where:

ObjectType - AppServer
Object - name of the appserver (ex. "HJn6server1")
Env - environment (ex. Test, UAT, PROD)
SectionName - name within the appserver configuration that holds attributes
Attributes - configuration attributes for a SectionName

I have been able to create one table per SectionName, but can't extend that to multiple sections. I used the following code to make one table:

index=websphere_cct (Object="HJn5server1" Env="Prod") OR (Object="HJn7server3" Env="UAT") SectionName="Process Definition" Order
    [ search index=websphere_cct SectionName | dedup Order | table Order ]
| fields - _*
| fields Object Attributes.* SectionName
| eval Object = ltrim(Object, " ")
| rename Attributes.* AS *
| table SectionName Object *
| fillnull value=""
| transpose column_name=Attribute header_field=Object
| eval match = if('HJn5server1' == 'HJn7server3', "y", "n")

Output:

Attribute                 HJn7server3                  HJn5server1                  match
SectionName               Process Definition           Process Definition           y
IBM_HEAPDUMP_OUTOFMEMORY                                                            y
executableArguments       []                           []                           y
executableTarget          com.ibm.ws.runtime.WsServer  com.ibm.ws.runtime.WsServer  y
executableTargetKind      JAVA_CLASS                   JAVA_CLASS                   y
startCommandArgs          []                           []                           y
stopCommandArgs           []                           []                           y
terminateCommandArgs      []                           []                           y
workingDirectory          ${USER_INSTALL_ROOT}         ${USER_INSTALL_ROOT}         y

What I would like to do is create as many tables as there are SectionNames for a given comparison between two Objects. But I cannot figure out how to modify the code to allow several tables in one dashboard, one per SectionName with its associated Attributes, for the two appservers in comparison. Please help.
I have two lookup files and I want to join them through a common field, MonthYear. I need to calculate transmission per dept = Total transmission * (size of dept / total size of dept). In lookup1 I need to calculate the proportion of size based on dept, e.g. Transmission for Eng dept = 119 * ((100+23) / 170)

| inputlookup lookup1.csv
| stats sum(size) as DeptMem by dept
| eventstats sum(DeptMem) as TotalSize
| append [ inputlookup lookup2.csv | stats sum(Transmission) as TotalTransmission ]
| eventstats values(TotalTransmission) as TotalTransmission
| eval "transmission per dept" = round(TotalTransmission * DeptMem / TotalSize, 2)
| stats values('transmission per dept') as "transmission per dept" by dept

Note: Based on your description, I believe that a breakdown by dept is the goal. You cannot get the total as you illustrated with "by MonthYear", and you cannot get a pie chart with "by MonthYear" if you want to break down by dept. Because you need a breakdown by dept, the only useful data in lookup2.csv is the total of Transmission. A single append is a lot more efficient than doing joins.
Hi, I have a dashboard that shows service ticket counts based on different parameters. Now I need to show a trend for the current year and the previous year for the duration selected by the user in the time picker. For example, if the user selects Jan 1, 2023 to Apr 1, 2023 in the time picker, then I need to form a query that selects the same duration of the previous year (Jan 1, 2022 to Apr 1, 2022) and shows the trend. How do I create the previous year's duration based on the duration selected in the time picker? Please advise.
Pagination in a table only appears in the Edit mode of a Splunk dashboard, not in View mode. Can we correct this?
I have a table in a database that I need to check every 30 minutes, starting from 7:00 AM in the morning. The first alert, i.e. at 7:00 AM, should send the entire table without checking any conditions. Next, there is a field in the table named ACTUAL_END_TIME. This column can hold only one of three values: a timestamp in HH:MM:SS format, the string In-Progress, or the string NotYetStarted. I need to check this table every 30 minutes and only trigger the alert when all the rows of the column ACTUAL_END_TIME contain a timestamp. NOTE: The alert should trigger only once per day. How do I set up this alert?
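For illustration, here is roughly how I picture the trigger condition - a rough sketch where the dbxquery is a placeholder for my actual DB Connect input, and the connection/table names are made up:

| dbxquery connection="my_connection" query="SELECT ACTUAL_END_TIME FROM my_table"
``` keep only rows that are not yet a timestamp ```
| where NOT match(ACTUAL_END_TIME, "^\d{2}:\d{2}:\d{2}$")
| stats count as unfinished
``` one result survives only when every row had a timestamp ```
| where unfinished = 0

With the alert set to trigger on "number of results > 0" and throttled for 24 hours, that should cover the once-per-day requirement; the 7:00 AM full-table send would presumably be a separate scheduled search.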
Hi, I'm using the Splunk Docker image with HEC to send a log. I got the Success message as in the guideline. How can I query the log to see "hello world", which is what I just sent? I tried a few search-related curl commands, but all of them just return a very long XML, and "hello world" is not in the response. For example:

curl -k -u admin:1234567Aa! https://localhost:8089/services/search/jobs -d "search *"

Could anyone share a search curl command that can return the "hello world" I sent? I only have one record, so I don't need complicated filtering.
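In case it helps anyone with the same question, the one-call pattern I found is the export endpoint, which streams results back directly - a sketch assuming the event went to the default index and the same admin credentials as above:

curl -k -u admin:1234567Aa! https://localhost:8089/services/search/jobs/export \
    -d search="search index=* \"hello world\"" \
    -d output_mode=json

The long XML from /services/search/jobs is only the job-creation response; the results have to be fetched from the job separately (or streamed in one step via export, as above).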
The prefix is the part that comes *before* the timestamp string and must not describe the timestamp string itself.  The prefix for the sample event would be ^[
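In props.conf terms that looks something like the following - a sketch with a made-up sourcetype and timestamp format (note the bracket must be escaped, since TIME_PREFIX is a regular expression):

# props.conf
[my_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19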
Glad to hear it!  Nice to know that the solution works on other systems as well.