All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a custom role with limited capabilities, including:

rest_apps_view
rest_properties_get
search

The role needs to run the following search via the REST API and write the output to a text file on the originating server:

| inputlookup xxx.csv | eval HASH=sha256(<FIELD B>+<FIELD C>) | table <FIELD A>, HASH

I have created a user with the relevant role and created a token for use in the curl request. If I run the above search in the UI it works fine, but when I run the curl I get a FATAL response message: "empty search". The curl I am using is:

curl -k -X GET -H "Authorization: Bearer <token>" https://mysearchead.com:8089/servicesNS/<user>/<app>/search/jobs/export -d search='<my search>' -d output_mode=csv > output.csv

So, my question is: which Splunk capabilities need to be enabled for my custom role to successfully call the search/jobs/export REST endpoint?
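A plausible cause of the "empty search" FATAL, independent of capabilities, is that `-X GET` combined with `-d` puts the search string where the export endpoint does not read it; the endpoint expects a POST form body (drop `-X GET`) with the search URL-encoded. A minimal sketch of building such a request with the Python standard library — the hostname, user, app, and token are placeholders taken from the question, not verified values:

```python
from urllib.parse import urlencode, quote

def build_export_request(base, user, app, spl, token):
    """Build the URL, headers, and POST body for search/jobs/export.

    Searches that start with '|' (generating commands) go as-is;
    anything else must be prefixed with the literal word 'search'.
    """
    if not spl.lstrip().startswith("|"):
        spl = "search " + spl
    url = f"{base}/servicesNS/{quote(user)}/{quote(app)}/search/jobs/export"
    headers = {"Authorization": f"Bearer {token}"}
    body = urlencode({"search": spl, "output_mode": "csv"})
    return url, headers, body

url, headers, body = build_export_request(
    "https://mysearchead.com:8089", "svc_user", "search",
    "| inputlookup xxx.csv | eval HASH=sha256(B) | table A, HASH",
    "<token>",
)
```

Sending url/headers/body as a POST with any HTTP client reproduces the intended curl. Capability-wise, running a search through this endpoint should only need the search capability the role already has, but treat that as an assumption to verify against your authorize.conf.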
I have an index called myindex:

NAME   AGE  CITY     COUNTRY    LEGAL AGE
Denis  17   London   UK         NO
Denis  18                       YES
Maria  17   Rosario  Argentina  NO
Maria  18                       YES
Nani   11   Paris    France     NO

This is a basic example. When LEGAL AGE=NO there are several more fields available than when LEGAL AGE=YES; notice that when LEGAL AGE=YES the fields "CITY" and "COUNTRY" don't exist at all. What I need is all the people in this index with all their information, EVEN if they are not of LEGAL AGE. I use a join for this:

index=myindex "LEGAL AGE"=NO | join NAME [ search index=myindex "LEGAL AGE"=YES ]

The problem is that it only works if the subsearch returns something. In this example it works for Denis and Maria, but not for Nani. How can I make it work even if the subsearch returns nothing?
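The likely SPL fix is an outer join (join type=left keeps left-side rows with no match). As an illustration only — not the poster's code — here is that left-join logic over hypothetical rows mirroring the table above:

```python
# Rows where LEGAL AGE=NO (richer fields) and a lookup of LEGAL AGE=YES rows.
no_rows = [
    {"NAME": "Denis", "AGE": 17, "CITY": "London", "COUNTRY": "UK"},
    {"NAME": "Maria", "AGE": 17, "CITY": "Rosario", "COUNTRY": "Argentina"},
    {"NAME": "Nani", "AGE": 11, "CITY": "Paris", "COUNTRY": "France"},
]
yes_rows = {"Denis": {"LEGAL_AGE": "YES"}, "Maria": {"LEGAL_AGE": "YES"}}

def left_join(left, right_by_key, key):
    """Keep every left row; merge matching right-side fields when present."""
    out = []
    for row in left:
        merged = dict(row)
        merged.update(right_by_key.get(row[key], {}))  # no-op when no match
        out.append(merged)
    return out

joined = left_join(no_rows, yes_rows, "NAME")  # Nani survives with no YES data
```

In SPL this corresponds to | join type=left NAME [ ... ]; another common pattern that avoids the subsearch entirely is | stats values(*) as * by NAME over both event sets.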
Hi: I am testing out the new dashboard options in Dashboard Studio, and I am a bit confused about how a feature works. I want to use a base search 'index=nginx source="/var/log/nginx/access.log"', which I have set up as a DataSource. I then want to chain that to multiple modifiers. To this end, I added a chain search '| stats count by status' linked to the parent search above, and I also created another chain search '| search splunk*' for testing.

If I create a dashboard panel graph (pie) and link it to the stats chain search, it says it can't find any data: 'Search ran successfully, but no results were returned'. If I click the magnifying glass from that, it returns results. If I have a table panel and use the splunk chain search, it finds results. If I have a chained search that uses '| search site=splunk*', despite that field existing, it finds no results, but the magnifying glass does. Can auto-extracted fields not be used in this manner? The data in the source logs is all in the format <key>="value" for easy auto-extraction of the fields. Thank you for any assistance/information you can provide.
Searches are starting to take more time to execute and are then getting deferred at 9:10 am every day. The number of searches is the same throughout the day, and no extra searches are running around that time.
I have a cluster consisting of some 6 or so indexers, and a search head cluster consisting of 3 SHs. In the web UI I'm getting:

The percentage of high priority searches delayed (76%) over the last 24 hours is very high and exceeded the red thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=55. Total delayed Searches=42

The percentage of non high priority searches delayed (77%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=440. Total delayed Searches=339

Users also report problems such as very slowly refreshing dashboards. But the Splunk components themselves do not seem to be stressed that much. The machines have 64G RAM each and 24 (indexers) or 32 (search heads) CPUs, but the load is at most 10 on the SHs or 12 on the indexers. If I run vmstat I see the processors mostly idling, and about half of the memory on the search heads is unused (even counting cache as used memory). So something is definitely wrong, but I can't pinpoint the cause. What can I check?

I do see that the search heads are writing heavily to disk, almost all the time. Maybe I should tweak some memory limits on the SHs to make them write to disk less? But which ones? Any hints? At first glance it looks as if I should raise the number of concurrent searches allowed, because the CPUs are idle; but if storage is the bottleneck that won't help much, since I'd be hitting the same streaming-to-disk problem, just with more concurrent searches.
Hi All, I have to show specific strings in my dashboard based on a metric value. Is it possible in a metric value widget to show, say, ABC if the metric value = 1 and XYZ if the metric value = 2? Basically I need conditional output in the widget based on the metric value. The text or string to be shown is constant, so if the metric value = 1 it will always be ABC that needs to be shown in the widget output. Regards, Gopikrishnan R.
Hi All, I have a requirement to store the DB agent custom metrics data in Analytics and apply ADQL to that data to query specific output. Is this possible at all, and if yes, how? Regards, Gopikrishnan R.
Hi, we have one inputlookup file X1.csv and one index=x2, and we want to fetch alarm details from the index for the device names that are common with the inputlookup file. In the inputlookup file we have device name, Location, Category, and IP, and the same device names exist in index=x2. Could you please help with how we can fetch the alarm details and work out the alarm time details, i.e. at which time we received an alarm for each device?

Thanks and Regards, Nikhil Dubey
4nikhildubey@gmail.com
nikhil.dubey@visitor.upm.com
7897777118
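One common SPL pattern for this is to filter the index with a subsearch over the lookup, e.g. index=x2 [| inputlookup X1.csv | fields device_name]. Purely as an illustration (with invented device names and fields, not the poster's actual data), the same membership filter in Python:

```python
import csv
import io

# Hypothetical lookup contents (X1.csv) and indexed alarm events.
lookup_csv = "device_name,Location,Category,IP\nsw01,London,core,10.0.0.1\n"
events = [
    {"device_name": "sw01", "alarm": "LINK_DOWN", "_time": "2021-08-06T07:00:00"},
    {"device_name": "sw99", "alarm": "LINK_DOWN", "_time": "2021-08-06T07:05:00"},
]

# Build the set of devices from the lookup, then keep only matching events.
devices = {row["device_name"] for row in csv.DictReader(io.StringIO(lookup_csv))}
matched = [e for e in events if e["device_name"] in devices]
```

The alarm time question then falls out of the kept events' _time values (in SPL, e.g. | stats min(_time) values(alarm) by device_name after the filter).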
Hi, please suggest how I can collect Windows event logs without the Splunk universal forwarder.
Hi all, I have a query using transaction:

source="abc_data1_*" index="testing" sourcetype="_json" | transaction startswith=(STATUS="FAIL") endswith=(STATUS="SUCCESS")

The events in the results are considered from most recent to oldest, but I want the transaction to process the older data first. I want the data to be sorted from the beginning and then have the transaction applied. "reverse" doesn't work with this. Does anyone know how to do this?
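Since transaction consumes events newest-first, one commonly suggested workaround is to swap startswith and endswith; another is to pair the events yourself after sorting ascending. A sketch of that chronological pairing logic, with synthetic events rather than the poster's data:

```python
events = [
    {"_time": 4, "STATUS": "SUCCESS"},
    {"_time": 3, "STATUS": "FAIL"},
    {"_time": 2, "STATUS": "SUCCESS"},
    {"_time": 1, "STATUS": "FAIL"},
]  # newest first, the order Splunk returns events in

transactions, current = [], None
for ev in sorted(events, key=lambda e: e["_time"]):  # walk oldest first
    if ev["STATUS"] == "FAIL" and current is None:
        current = [ev]                    # open a transaction on FAIL
    elif ev["STATUS"] == "SUCCESS" and current is not None:
        current.append(ev)
        transactions.append(current)      # close it on the next SUCCESS
        current = None

# yields two FAIL -> SUCCESS transactions in chronological order
```

This is only a model of the intended grouping; whether swapping startswith/endswith reproduces it exactly depends on your events, so test against a small time range first.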
I am consuming some data using an API and want to calculate the average time it took across all my customers. After each ingestion (data consumed for a particular customer), I log a time metric for that customer:

timechart span=24h avg(total_time)

To calculate the average I cannot simply extract the time field and do avg(total_time), because if customerA completes ingestion in 1 hour and customerB takes 24 hours, customerA will be logged 24 times and customerB only once, giving me inaccurate results and dragging down the average. How do I create a filter so that, for a duration of say 7 days, I get only the log line per customer with the maximum total_time, i.e. one log line per customer that has the max total_time over that 7-day period?
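The usual SPL shape is to reduce to one row per customer first (e.g. stats max(total_time) as total_time by customer over the 7-day window) and only then average. The equivalent logic sketched in Python, with made-up numbers:

```python
logs = [
    {"customer": "A", "total_time": 1.0},
    {"customer": "A", "total_time": 1.0},   # A logs many times
    {"customer": "A", "total_time": 1.0},
    {"customer": "B", "total_time": 24.0},  # B logs once
]

# One value per customer: the maximum total_time seen in the window.
per_customer = {}
for row in logs:
    c = row["customer"]
    per_customer[c] = max(per_customer.get(c, 0.0), row["total_time"])

# Averaging the per-customer maxima is no longer skewed by A's repeats.
avg = sum(per_customer.values()) / len(per_customer)
```

In SPL that two-step reduction would look roughly like | stats max(total_time) as total_time by customer | stats avg(total_time), run over earliest=-7d.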
Hi, the splunk service on the search head is stopping frequently. After a restart the service would come back up, but now it is not starting on the search head even after a restart. I see the errors below:

ERROR ScriptRunner - Error setting up output pipe.
ERROR AdminManagerExternal - External handler failed with code '-1' and output: ''. See splunkd.log for stderr output.

Any suggestions on how to fix this?
I want to search for the endpoint /api/work/12345678, i.e. /api/work/(8-digit number). My query below gives me all three endpoints in the logs; I only want the ones that are /api/work/12345678.

Search query: cf_app_name="preval" cf_space_name="prod" msg="*/api/jobs/*"

My logs contain:

msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678/data HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"
msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678 HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"
msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678/photo HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"

Thanks
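A regex anchored on exactly eight digits followed by whitespace excludes the /data and /photo variants; in SPL that could be | regex msg="/api/work/\d{8}\s" (unverified against your actual data). The same idea in Python against the three sample lines:

```python
import re

logs = [
    'abc - [...] "GET /api/work/12345678/data HTTP/1.1" 200',
    'abc - [...] "GET /api/work/12345678 HTTP/1.1" 200',
    'abc - [...] "GET /api/work/12345678/photo HTTP/1.1" 200',
]

# Eight digits followed by whitespace: the path ends there,
# so /api/work/12345678/data and .../photo do not match.
pattern = re.compile(r"/api/work/\d{8}(?=\s)")
hits = [line for line in logs if pattern.search(line)]
```

Only the bare /api/work/12345678 line survives the filter.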
All my log statements are in the format below:

{ "source": "stdout", "tag": "practice/myapplication:4444a76b917", "labels": { "pod-template-hash": "343242344", "version": "9216a76b917b8258a1ee6de7d3bbf9a78ca59f1f", "app_docker_io/instance": "my-application" }, "time": "1628235185.043", "line": "2021-08-06T07:33:05.043Z LCS traceId=a83a082592cf2275, spanId=a83a082592cf2275 LCE [qtp310090733-278] ERROR c.p.p.c.a.ErrorHandlerAdvice.logErrorDesc(34) - ERROR RESPONSE SENT", "attrs": { "image": "practice/myapplication:4444a76b917", "env": "dev", "region": "local", "az": "us-west" } }

I want to extract the timestamp from the beginning of each "line" value and sort my results based on that timestamp. I have no experience with Splunk search queries; can someone help?
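In SPL this is typically a rex to pull the leading timestamp out of the line field, followed by a sort (roughly | rex field=line "^(?<ts>\S+)" | sort 0 ts — unverified against your data). A sketch of the extraction logic in Python, using two abbreviated events in the same shape as the example:

```python
import json
from datetime import datetime

raw = [
    '{"line": "2021-08-06T07:33:05.043Z LCS ... ERROR RESPONSE SENT"}',
    '{"line": "2021-08-06T07:31:00.000Z LCS ... an earlier error"}',
]

def line_ts(event):
    # The first whitespace-delimited token of "line" is an ISO-8601
    # timestamp with a trailing Z (UTC).
    stamp = json.loads(event)["line"].split()[0]
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

ordered = sorted(raw, key=line_ts)  # oldest event first
```

The same parse-then-sort step is what the rex/sort pipeline does inside Splunk.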
Hello guys, I am creating a dashboard which shows some statistics about the UFs in our environment. While looking for a good way to get the number of events delivered per index, I noticed something I can't explain at the moment; hopefully you can shed some light on it. To my understanding:

# The number of events indexed on the indexer from the forwarder itself
| tstats count as eventcount where index=* OR index=_* host=APP01 earliest=-60m@m latest=now by index, sourcetype | stats sum(eventcount) as eventcount by index

index      eventcount
_internal  11608
win        1337

# The number of events forwarded by the forwarder
index=_internal component=Metrics host=APP01 series=* NOT series IN (main) group=per_index_thruput | stats sum(ev) AS eventcount by series

series     eventcount
_internal  1243
win        2876

But the two deliver different values for the same time range (60 min). Does anyone have an idea why this is happening? Thanks. BR, Tom
We created new STG Splunk Alerts and enabled them starting July 27. The strange thing is that they cannot send emails to prj-sens-test@mail.rakuten.com or to the MS Teams email 581e7bfc.OFFICERAKUTEN.onmicrosoft.com@apac.teams.ms for any new alert that fires.

Since we migrated to a new system, we cloned our old STG Splunk Alerts and then updated the name and the sourcetypes for the new STG Splunk Alerts. Everything else (schedule, email recipient, subject, and email message) is the same. We have deleted the old STG Splunk Alerts. Our last email from an STG Splunk Alert was on July 28, which was from an old Splunk Alert.

We are wondering why it suddenly stopped sending emails. May I ask if you have any ideas? This is only an issue in STG Splunk; new alerts in PRD Splunk are working properly.

Our new alerts are here: https://stg-asplunksrch101z.stg.jp.local/en-US/app/sens/alerts

This is for STG Splunk with the following details:
User name: user_sens
Splunk host: https://stg-asplunksrch101z.stg.jp.local/
Group name: Ichiba Business Expansion Group
App team name: ibe
Service ID: 1013
We are planning to use Infrastructure-as-Code (IaC) for our Splunk cluster implementation. Can anyone please advise whether there is an API to bootstrap a SHC (search head cluster)?
Please help with SPL for when connecting to different IPs is successful. Field list: ip -> src_ip, access -> success (the field names can be changed to whatever is convenient).
I need assistance with a side project of mine: is there a way to extract data only from a certain location to create a dashboard? I need to create a dashboard that gives me a consolidated output and then segregates it individually by count and other details. I checked, but only found docs covering webhooks that push data from Splunk to other tools; I need a way to pull data from a tool.
I am trying to get an alert when an Exception error happens, but there are many hosts and services. In Splunk the services and hosts aren't arranged, so I manually added the service names and hosts in a CSV file. Is there a way, or a similar condition, to get log events saying this service is getting an error on this host, together with the message?
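One approach is a lookup-driven match: keep the CSV of services and hosts as a lookup and filter error events against it (in SPL, roughly an inputlookup subsearch or the lookup command). A sketch of that matching logic, with invented service/host names and fields purely for illustration:

```python
import csv
import io

# Hypothetical lookup of service/host pairs to watch (the poster's CSV).
lookup = "service,host\npayments,app01\nsearch,app02\n"
watch = {(r["service"], r["host"]) for r in csv.DictReader(io.StringIO(lookup))}

errors = [
    {"service": "payments", "host": "app01", "message": "Exception: timeout"},
    {"service": "billing", "host": "app09", "message": "Exception: oops"},
]

# Keep only errors whose (service, host) pair appears in the lookup,
# and format an alert line carrying the message.
alerts = [
    f'{e["service"]} on {e["host"]}: {e["message"]}'
    for e in errors
    if (e["service"], e["host"]) in watch
]
```

In SPL the equivalent filter plus message would be something like a lookup on service and host followed by | table service host message in the alert's search.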