All Topics

Hi everyone,

I'm working on a dashboard whose goal is to automate manual data entry that currently takes place across hundreds of spreadsheets. The data in question is a stats table showing relations on an x/y grid, to be copy-pasted into a spreadsheet. Some grids are tiny, e.g. 4x4, but some are truly ridiculous. The dashboard itself is taken care of: DB data is ingested daily and output to a filterable stats table. People can filter by their specific spreadsheet needs and copy-paste straight into Excel. This is mostly working and saving tons of manual info gathering, but unfortunately, for the unlucky people with the ridiculous grids, e.g. 100x1000, copy-pasting off the dashboard across multiple pages becomes another problem.

The question: what efficient ways can I give multiple people the most usable access to their specific filtered extracts from one dashboard table, i.e. making it fully self-service for all? Ideally a report would just do everything automatically and mail everyone, but that creates a new nightmare, with hundreds of unique extracts and hundreds of people to receive them. I figure I need some way everyone can visit the dashboard, select their ID from a dropdown to filter their specific table grid, and then get an option to either copy the whole table automatically (click a button, it's copied) or download a CSV extract or something. Is anything like this possible from a dashboard? Any ideas?
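A minimal Simple XML sketch of the self-service pattern described above (the lookup name, index, and field names are assumptions, not from the question): a dropdown sets a token, the table filters on it, and each user exports their own slice.

```xml
<form>
  <fieldset submitButton="false">
    <!-- The lookup grid_owners.csv and field id are hypothetical placeholders -->
    <input type="dropdown" token="user_id">
      <label>Your ID</label>
      <search>
        <query>| inputlookup grid_owners.csv | stats count by id</query>
      </search>
      <fieldForLabel>id</fieldForLabel>
      <fieldForValue>id</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- index and field names are assumptions -->
          <query>index=grid_stats id=$user_id$ | table x, y, value</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

In my experience the Export icon on a table panel downloads the panel's complete result set as CSV, not just the visible page, so even the 100x1000 users would never need to page through the table.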
We have a problem when users try to query a specific index. They can query all of the other indexes granted by this role, but not this new one. The role was configured by a Professional Services resource and we verified authorize.conf, but only users with the "admin" or "users" roles can query this index.

etc/system/local/authorize.conf contains:

[role_problem]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 3
srchIndexesAllowed = index_good1;index_good2;index_problem

We tried creating a new role that includes access to "all non-internal indexes" (like the admin and users roles), but that doesn't work either. There is no problem on another platform running version 9.0.4; however, this one on version 8.1.3 with the same configs fails to search the index. We analyzed the job inspector as well but didn't find any problem. No permission issues are logged in splunkd.log or search.log; simply no data returns from the indexers.
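One thing worth checking (an assumption, since the topology isn't described): in a distributed deployment the role must resolve correctly in the merged configuration on the search head, and btool shows which file wins if another authorize.conf overrides this stanza. A sketch of the check:

```conf
# Run on the search head; --debug prints which file each line comes from
splunk btool authorize list role_problem --debug

# The merged stanza should resolve to:
[role_problem]
srchIndexesAllowed = index_good1;index_good2;index_problem
```

If the merged output differs from etc/system/local, a higher-precedence app-level authorize.conf is likely overriding the role.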
Hello, I have a search used on a dashboard that I would like tweaked. Currently this search/panel displays the variance of the current hour over the same hour the week before. For example: the value at hour 10 on Wed 7/19/23 is compared to the value at hour 10 on Wed 7/12/23 to give a variance. Instead, I would like to compare the current hour's value to the average of that same hour over the last two weeks (instead of a single day). For example, I would like hour 10 on Wed 7/19/23 to be compared to the average of hour 10 on each day from Wed 7/5/23 to Tues 7/18/23.

Current search:

| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=-13d@d by _time span=1h
| eval hour=strftime(_time,"%H")
| eval ReportKey="2weekprior"
| stats values(count) as count by hour, ReportKey
| append
    [| tstats count where index=msexchange host=SMEXCH13* earliest=-7d@d latest=-6d@d by _time span=1h
     | eval hour=strftime(_time,"%H")
     | eval ReportKey="1weekprior"
     | stats values(count) as count by hour, ReportKey ]
| append
    [| tstats count where index=msexchange host=SMEXCH13* earliest=-0d@d latest=-0h@h by _time span=1h
     | eval hour=strftime(_time,"%H")
     | eval ReportKey="currentweek"
     | stats values(count) as count by hour, ReportKey ]
| eval currenthour=strftime(_time,"%H")
| xyseries hour, ReportKey, count
| eval nowhour = strftime(now(),"%H")
| eval comparehour = nowhour-1
| where hour<=comparehour
| sort by -hour
| table hour, nowhour, comparehour, currentweek, 1weekprior, 2weekprior
| eval 1weekvar = currentweek/'1weekprior'
| eval 2weekvar = currentweek/'2weekprior'
| eval variance=round(((('1weekvar'+'2weekvar')/2)*100)-100,2)
| table hour, variance
| head 5
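A sketch of one way to get the 14-day-average baseline in a single pass (the index/host filters are copied from the question; treating "today" as the comparison window is an assumption): pull the whole window with one tstats call, label the current day separately, and average the rest per hour.

```spl
| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=-0h@h by _time span=1h
| eval hour=strftime(_time,"%H")
| eval ReportKey=if(_time>=relative_time(now(),"@d"), "today", "baseline")
| stats avg(count) as count by hour, ReportKey
| xyseries hour, ReportKey, count
| eval variance=round((today/baseline)*100-100, 2)
| where isnotnull(variance)
| table hour, variance
```

This avoids the two appends entirely, since stats avg over the "baseline" rows is exactly the per-hour average across the prior 14 days.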
So we have this alert set up to check whether any hostnames that are being monitored haven't received any time-monitoring data. The current search is as follows:

| inputlookup TimeServersV2.csv
| search server="*"
| eval HOST=lower(server)
| fields HOST
| where NOT
    [search (index=os sourcetype=test_stats*) OR (sourcetype=syslog ptp10 OR phc10sys) OR (index=windows sourcetype="Script:TimeStatus") OR (index=windows sourcetype=domtimec) OR (index=os sourcetype=time)
     | dedup host
     | eval HOST=lower(host)
     | fields HOST ]

The issue with this, we believe, is that once it runs at 8 AM it takes a bit longer to run and process data, and it sends out partial results after a minute or so of running the query. We have a lot of saved reports/alerts/searches running at the top of most hours, so I think it may be sending out incomplete search results after a bit of running, as Splunk starts the next job. I moved its cron schedule up an hour and a half to a lighter-use hour, so that may help a bit, but I would also like to optimize this search so it runs faster. Currently it runs in about 40 seconds to a little over a minute. What would be the best way to optimize this search so it could possibly run in under 30 seconds, if possible? Running it outside the scheduled time takes about 6 seconds; it's just slow when it runs alongside all of the other searches. It'll send us an alert with a list of hostnames it found that were not on the list, yet when we run it manually, it will only spit out 4 or 5 results. That's why we think it's not finishing the search when it sends out an alert. Any help would be appreciated.
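A hedged optimization sketch: the clauses that name an index and sourcetype can be answered from indexed metadata with tstats, which is usually much cheaper than an event search. The syslog clause contains raw terms (ptp10, phc10sys) that tstats cannot filter on, so that part would need to stay an event search, or those terms would need to become an indexed field; the sketch below simply drops it, which is an assumption to verify:

```spl
| inputlookup TimeServersV2.csv
| eval HOST=lower(server)
| fields HOST
| where NOT
    [| tstats count where (index=os sourcetype=test_stats*) OR (index=windows sourcetype="Script:TimeStatus") OR (index=windows sourcetype=domtimec) OR (index=os sourcetype=time) by host
     | eval HOST=lower(host)
     | fields HOST ]
```

Separately, scheduling the alert at an off-peak minute (e.g. a cron of 12 * * * * rather than 0 * * * *) keeps it away from the top-of-the-hour contention that appears to cause the partial results.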
Hi, I have a lookup file which has ClientName, ostype, currentforwarderversion.

I wanted to know which client is reporting to which deployment server. Can we have a query to find this, and how do I use the client name as a field value in the query?

Thanks
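A hedged sketch of one approach (the lookup file name below is an assumption, and the REST endpoint has to be run on the deployment server itself): a deployment server exposes its phone-home clients over REST, which can then be enriched from the lookup using the client name as the join field.

```spl
| rest /services/deployment/server/clients splunk_server=local
| eval ClientName=lower(hostname)
| lookup forwarder_inventory.csv ClientName OUTPUT ostype, currentforwarderversion
| table ClientName, ip, ostype, currentforwarderversion
```

If there are several deployment servers, running this on each (or via a search head that can reach each) and adding a constant eval ds="name" column would show which client reports where.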
Hello, I'm trying to find an app or add-on for my Cisco WLC. Should I use the current Cisco IOS add-on to index WLC data, or try another add-on? I want to ensure WLC data is kept separate from IOS data. Thank you
Hi, we’ve had a problem recently where data has stopped flowing to an index, and it’s a few days before we find out and resolve it. Does anyone know of a Splunk 9.x feature or an add-on that can be used to monitor and alert when data stops for a set amount of time?
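One common pattern (a sketch; the one-hour threshold is an assumption to tune per index) is a scheduled alert that compares each index's newest event time against the wall clock:

```spl
| tstats latest(_time) as last_event where index=* by index
| eval minutes_silent=round((now()-last_event)/60, 0)
| where minutes_silent > 60
| sort - minutes_silent
```

Scheduled every 15 minutes or so with "trigger when number of results is greater than 0", this fires as soon as any index has been silent past the threshold, rather than days later.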
Hello everyone! A quick overview of the Strategies & Rules for the Answers-a-thon:

Strategy
This is a Splunk Answers competition, so the quality category can include items like links to examples in other posts in the forum, links to Splunk documentation, etc. Teams can strategize to split up the questions based on expertise. Teams can also strategize to submit answers as they go, or to submit at the end so that other teams can’t copy answers. You can try to eavesdrop and answer questions other teams are not answering. What strategy will you adopt?!

Rules
1. Teams will have 20 minutes to answer as many questions as they can!
2. Each question will have a difficulty level between 1 and 4.
3. Quality/thoroughness of answers will be judged on a scale of 1-5.
4. If your answer gets chosen as the selected answer, you get an extra point.
5. Answers must be at least partially correct to receive points.
6. A maximum of 10 points can be awarded per question.
7. In the event of a tie, a tiebreaker question will be revealed; the team that comes closest to the correct answer wins.
8. The team with the highest score at the end of the game wins.

Have fun everyone!
Hello, I am working on a query where I need to set an alert based on failure percentages. Calculating the failure percentage is the tricky part. Here is my sample query:

index=myindex (status=success OR status=inprogress)
| bin _time
| stats count(eval(like(status, "success"))) as success, count(eval(like(status, "inprogress"))) as inprogress by id, _time

The conditions for success and failure are as below:

Success: | where success = 1 AND inprogress >= 1
Failure: | where success = 0 AND inprogress >= 1

Now I want to create an alert based on a failure percentage of 10%. How do I calculate the failure and success percentages here? The id you see in the BY clause is just a customer ID, so I'd like to get alerted based on 10% failure.

Best Regards
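A hedged sketch building on the query above (the span=1h bucket size is an assumption): classify each id/time bucket as success or failure using the stated conditions, then compute the failure share over all classified buckets and alert when it crosses 10%.

```spl
index=myindex (status=success OR status=inprogress)
| bin _time span=1h
| stats count(eval(status="success")) as success, count(eval(status="inprogress")) as inprogress by id, _time
| eval outcome=case(success=0 AND inprogress>=1, "failure",
                    success>=1 AND inprogress>=1, "success")
| where isnotnull(outcome)
| stats count(eval(outcome="failure")) as failures, count as total
| eval failure_pct=round(failures/total*100, 2)
| where failure_pct >= 10
```

With "trigger when number of results is greater than 0", the alert fires only when the failure share reaches 10%; adding id to the final stats BY clause would instead alert per customer.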
I calculated a one-month percentage for status "online" and built a table of date and percentage, displayed as a column chart visualization. I want my chart to reflect a different color based on the percentage range, e.g. 0-40 red, 40-60 orange, 60-80 yellow, 80-100 green.

I modified the source with:

<option name="charting.fieldColors">Percentage:0-40:#f13f56,40-60:#f1813f,60-80:#f1da3f,80-100:#72ae0d</option>

but with no success. Can someone assist me; is this doable in a Splunk visualization? Thanks in advance
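As far as I know, charting.fieldColors maps series names to colors and does not understand value ranges, so a common workaround (a sketch; the date and Percentage field names are assumptions) is to split the single series into one series per range and color those:

```spl
| eval band=case(Percentage<40, "0-40", Percentage<60, "40-60", Percentage<80, "60-80", true(), "80-100")
| chart values(Percentage) as Percentage by date, band
```

Then the per-series colors use the documented fieldColors syntax (series name to hex value):

```xml
<option name="charting.fieldColors">{"0-40": 0xF13F56, "40-60": 0xF1813F, "60-80": 0xF1DA3F, "80-100": 0x72AE0D}</option>
```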
Hello all, I have the following use case: we have a dropdown filter which reads the values you can choose from a lookup. Now we would like to implement a simple button that is hidden by default but should appear once the specific value "Alaska" is selected in the dropdown filter. How can I implement this? Somehow I can't make it work. I would appreciate any help. Thank you in advance!
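A Simple XML sketch of the usual token pattern (the token names, lookup name, and field name are assumptions): the dropdown's change handler sets a token only when "Alaska" is selected, and a panel with depends on that token holds the button.

```xml
<input type="dropdown" token="region">
  <label>Region</label>
  <search>
    <query>| inputlookup regions.csv | stats count by region</query>
  </search>
  <fieldForLabel>region</fieldForLabel>
  <fieldForValue>region</fieldForValue>
  <change>
    <condition value="Alaska">
      <set token="show_button">true</set>
    </condition>
    <condition>
      <unset token="show_button"></unset>
    </condition>
  </change>
</input>
<!-- elsewhere in the dashboard: hidden until show_button is set -->
<panel depends="$show_button$">
  <html>
    <a class="btn btn-primary" href="#">My Button</a>
  </html>
</panel>
```

The catch-all second condition unsets the token again when any other value is chosen, which is what hides the button by default.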
Does AppDynamics have any metrics covering GC cycles and consumption on SAP application server or SAP HANA DB processes?
Hello,

I have the following code for a bar chart where I need to show, stacked, the results from the 3 ifs that I have. The code retrieves data by week number and divides it by each day of the week. Is it possible to group the data by week number, showing each result for the day of the week stacked by the results of the 3 ifs that I have?

index="" host= sourcetype=csv
    [search index="" host= sourcetype=csv source=C:\\2023-CW28_2.csv
     | dedup source
     | table source
     | sort - source
     | head 1 ]
| table iswID, iswTitle, iswSD, pverID, pverSF
| where iswSD >= strftime(relative_time(now(), "-3w@w"),"%Y-%m-%d")
| eval Week=strftime(strptime(iswSD,"%Y-%m-%d"),"%V")
| eval Day=strftime(strptime(iswSD,"%Y-%m-%d"),"%A")
| eval ISWGT=if(iswSD>pverSF,1,0)
| eval ISWLE=if(iswSD<=pverSF,1,0)
| eval non_mapped=if(match(pverID,""), 1,0)
| chart sum(ISWGT) as "iswSD gt pverSF", sum(ISWLE) as "iswSD LE pverSF", sum(non_mapped) as "Non Mapped" by Week, Day
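One hedged way to keep all three sums stacked per week and day: chart by two fields splits the series by the second field, so combining Week and Day into a single category field leaves the three sums as the stackable series. A sketch of just the tail of the search:

```spl
| eval WeekDay=Week."/".Day
| chart sum(ISWGT) as "iswSD gt pverSF", sum(ISWLE) as "iswSD LE pverSF", sum(non_mapped) as "Non Mapped" by WeekDay
```

Stacking itself is then a chart option rather than SPL:

```xml
<option name="charting.chart.stackMode">stacked</option>
```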
Grateful if anyone can help or guide me in the right direction. I am running a search against a lookup table. The output is a list of websites that were accessed. The website and source address are in index1. I want to use the source address to search index2 to locate the user assigned to that IP address. Matching is working well; I am stuck on how to proceed with the second search query.

index=index1 domain=* OR index=index2
| lookup weblist.csv domain AS domain OUTPUT domain AS MATCHED
| where isnotnull(MATCHED)
| table _time, MATCHED, src, user

In index2, the src_ip and user fields exist.
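A hedged sketch of one way to join the two halves: normalize the IP field name across the indexes, then aggregate by IP so the matched domains from index1 and the user from index2 land on the same row. The isnotnull filter moves after the stats, since index2 events carry no MATCHED value and would otherwise be dropped before the join:

```spl
index=index1 OR index=index2
| lookup weblist.csv domain AS domain OUTPUT domain AS MATCHED
| eval ip=coalesce(src, src_ip)
| stats values(MATCHED) as MATCHED, values(user) as user by ip
| where isnotnull(MATCHED)
```

Each resulting row is one source IP with the websites it accessed and the user assigned to it.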
Hello, I need to modify the _time value based on... the _time value.

If:
1) the original _time is before working hours, set the new _time to the working-hours start; e.g. if the original _time is after 00:00 and before 9:00, set the new _time to 9:00.
2) the original _time is within working hours, leave it.
3) the original _time is after working hours, move it to the next day's working-hours start; e.g. if the original _time is after 16:00 and before 24:00, set the new _time to 9:00 of the next day.

Anyone have an idea?

Regards
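A hedged sketch of the three rules with eval/case, assuming a 09:00-16:00 working window and ignoring weekends: relative_time snaps to midnight with @d and then adds nine hours, so the before-hours and after-hours cases differ only by the +1d.

```spl
| eval hour=tonumber(strftime(_time, "%H"))
| eval _time=case(
    hour<9,   relative_time(_time, "@d+9h"),
    hour>=16, relative_time(_time, "+1d@d+9h"),
    true(),   _time)
```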
Hello, I would like to make a stacked column chart with the number of errors by hour and error type (warning, error, etc.). The log lines look like this:

[2023-07-19T03:55:16,043][ERROR][o.o.s.i.DetectorIndexManagementService] [opensearch-cluster-master-2] info deleteOldIndices

I was filtering out INFO messages and parsing the error type using a regex (both of which work so far), but I cannot group by error type.

index=* "pod"="*opensearch*"
| search NOT "[INFO ]"
| rex field=_raw "^\[([0-9\-T:,]*)\]\[(?<type>[A-Za-z ]*)\]"
| timechart span=1h count
```| stats avg(count) as count by Hour type```
| chart avg(count) AS count BY Hour type

I only get one value per hour, labeled as "count". Any suggestion how I could split it by hour *and* type? Thank you!
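A hedged sketch of the fix: timechart already buckets by _time, so the hourly split comes for free and the type split goes in the BY clause; the later chart/stats stages shouldn't be needed at all.

```spl
index=* pod="*opensearch*" NOT "[INFO ]"
| rex field=_raw "^\[[0-9\-T:,]+\]\[(?<type>[A-Za-z ]+)\]"
| timechart span=1h count by type
```

This yields one column per type per hour; stacking is then the Format menu's stack mode, or <option name="charting.chart.stackMode">stacked</option> in the dashboard source.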
Hello, we have an alert configured to send emails to users through an SMTP server. We noticed that yesterday that particular alert created 63 events; while the others successfully sent emails, there was an error while sending one alert:

ERROR:root:Connection unexpectedly closed: [Errno 104] Connection reset by peer while sending mail to "XXX@domain.com"

We are not sure if this is related to Splunk or whether we should check on the SMTP server side. Can someone please guide us? This error occurred only once, and the email flow was restored after a while. Regards.
I want to extract the message, which is 'until-successful' retries exhausted, from the logs below. I'd also like a second rex query to extract both the message and the element and get them into a table. Any help will be appreciated.

{
   logger: org.mule.runtime.core.internal.exception.OnErrorPropagateHandler
   message:  ********************************************************************************
Message             : 'until-successful' retries exhausted
Element             : bmw-sl-nsp-case-readSub_Flow/processors/1 @ bmw-sl-nsp-prd-api:write/bmw-sl-nsp-case-read.xml:88 (Until Successful)
Element DSL         : <until-successful maxRetries="${max.retries}" doc:name="Until Successful" doc:id="b76dd101-8752-43aa-ab94-d548b699ea7a" millisBetweenRetries="${time.between.retires.case}"> <http:request method="GET" doc:name="Get Cases" doc:id="b846734d-4ff0-479d-bc21-e112cd9e8919" config-ref="HTTP_Request_configuration" path="${schedular.getcases.target.path}" sendCorrelationId="ALWAYS" correlationId="#[correlationId]"> <http:query-params><![CDATA[ #[output application/java --- { "startTimestamp" : vars.startTimestamp, "country" : vars.currentCountry, "endTimestamp" : vars.endTimestamp, "businessUnit" : vars.currentBusinessUnit }]
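A hedged sketch of the two extractions (assuming the padding around the colons is plain whitespace): rex is case-sensitive, so "Message :" matches without also catching the lowercase json "message:" key, and "Element :" matches before "Element DSL :" because DSL breaks the colon pattern.

```spl
| rex "Message\s+:\s+(?<message>[^\r\n]+)"
| rex "Element\s+:\s+(?<element>[^\r\n]+)"
| table message, element
```

Each capture stops at the end of its line, so the table gets the retries-exhausted text and the flow/processor path as two clean columns.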
Hi Team, I am using this search to check for UF "down" status based on last connection time. But when I remove a server from the deployment server, this search output still shows the server in down status; it should no longer be visible in the Splunk search.

Need a suggestion on this to get the exact output for down status.
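Since the search itself isn't shown, this is only a hedged sketch of one approach: restrict the down-status results to clients still registered on the deployment server (the REST endpoint must run on the deployment server, the join field is an assumption, and the macro name is a placeholder for the existing search):

```spl
| rest /services/deployment/server/clients splunk_server=local
| eval host=lower(hostname)
| fields host
| join type=inner host
    [ search `your_existing_down_status_search` ]
```

Anything deleted from the deployment server then drops out of the REST list and therefore out of the alert, without touching the down-status logic itself.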
Hi, let me first state that I am very new to Splunk. How can I do the following, please? I would like to add a column called Department to my table. The department value is not part of the event data; it is something I would like to assign based on the value of host:

Department   Host    IP Address
Sales        host1   15.20.10.5
             host2   15.20.10.15
             host3   15.20.10.25
HR           host4   15.20.10.35
             host5   15.20.10.45
             host6   15.20.10.55
IT           host7   15.20.10.65
             host8   15.20.10.75
             host9   15.20.10.85

I also would like to create a Department dropdown menu that filters hosts based on department (dashboard). Thank you for your time. I appreciate all your help
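A hedged sketch of the usual pattern (the file name departments.csv and the index are assumptions): put the mapping in a CSV lookup, one row per host, and so on for the remaining hosts:

```csv
host,ip,department
host1,15.20.10.5,Sales
host2,15.20.10.15,Sales
host4,15.20.10.35,HR
```

Uploaded as a lookup, it enriches events and feeds the dropdown (where $dept_tok$ is the dashboard token the dropdown sets):

```spl
index=main
| lookup departments.csv host OUTPUT department as Department
| search Department="$dept_tok$"
| table _time, host, Department
```

The dropdown input itself can be populated from the same file with | inputlookup departments.csv | stats count by department, so the list of departments never has to be hard-coded.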