All Topics


I have a lookup table that I have uploaded to Splunk. I added a lookup definition for it, and the permissions on both the table and the definition are global (read for all and shared among all apps). Both the table and the definition are stored in the Search app context, but that shouldn't matter when they are shared among all apps, right?

However, when I go to add a lookup field to a dataset to enrich the data stored in that dataset, the drop-down from which you select the lookup to use doesn't include the aforementioned custom lookup. In fact, the drop-down list only extends as far as lookups beginning with 'T' and then stops. So even though we have the Splunk_TA_Windows apps installed, many of those lookups are not present in the drop-down either, despite having similar global visibility and permissions to my custom lookup. Has anyone else encountered this? Am I missing something?
I am currently working on the architecture design for our Splunk platform in AWS. We have ES and are planning to leverage SmartStore for low-cost data retention.

I was reading through the prerequisites for SmartStore, and one of them states: "For SmartStore use with Splunk Enterprise Security, confirm that you have enough local storage available to accommodate 90 days of indexed data, instead of the 30 days otherwise recommended. See Local storage requirements."

Our data retention requirement is 90 days in total, of which we were planning to keep 50 days on local fast storage (to save on cost, which is the whole idea behind using SmartStore). If local disk for 90 days of indexed data is mandatory, is it even worth considering S3? Could anyone please help with some advice on this?
Hello everyone. Does Splunk write anything to its internal logs indicating that it needs to be updated? When we open the web interface, there is a message that a new version is available. Is the same message recorded in any of Splunk's internal logs?

Thank you.
I'm looking for a query that will tell me when the error rate increases, e.g. 5 minutes ago it was 120 errors but now it is more than 10% above that. My current search is as below:

index=aws_kubernetes app=nio tag=error env=prd* | timechart span=1m count by app limit=0

This shows me the standard error rate over time, so I need to know when a percentage increase happens.
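A minimal sketch of one way to surface a percentage jump, assuming 5-minute buckets and a single app (the index, tag and env values are taken from the search above; the 10% threshold is illustrative):

index=aws_kubernetes app=nio tag=error env=prd*
| timechart span=5m count AS errors
| delta errors AS change
| eval prev = errors - change
| eval pct_increase = round(100 * change / prev, 1)
| where prev > 0 AND pct_increase > 10

delta compares each bucket with the previous one; splitting by app as in the original search would need something like streamstats current=f window=1 last(errors) AS prev by app instead of delta.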
Hello Team, I have about 10K keywords to search for. It is not practical to construct a large query like the one below:

index=dev (key=val1 OR key=val2 OR key=val3 ... key=val10000)

Is there any other way to search? Thanks, Phaniraj
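A minimal sketch of the usual alternative, assuming the keywords are loaded into a CSV lookup (the file name keywords.csv and the column name key are placeholders, and subsearch result limits may need raising for 10K rows):

index=dev [ | inputlookup keywords.csv | fields key ]

The subsearch expands to (key=val1 OR key=val2 OR ...) at search time. Another option is to search more broadly and filter afterwards, e.g. | lookup keywords.csv key OUTPUT key AS matched | where isnotnull(matched).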
Hi Splunk Support Team, I am using the Splunk trial version for training/learning purposes, which was activated on 2nd Sept. I am getting the error "Hmmm… can't reach this page".

Today I am not able to log in to Splunk and am getting the above error on all browsers. Requesting you to look into this and advise.
Hi, I want to send custom user data from components dynamically. How can I initialize AppDynamics in a component instead of in index.html?
Hello, Is there an option to set up an alert that will trigger only after the search has reached the threshold twice? Thanks
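If the intent is two consecutive breaches, one hedged sketch is to have the alert search itself look back over the last two intervals and count breaches, assuming 5-minute intervals and a placeholder index and threshold:

index=your_index
| bin _time span=5m
| stats count by _time
| eval breach = if(count > 100, 1, 0)
| streamstats window=2 sum(breach) AS consecutive_breaches
| where consecutive_breaches = 2
| tail 1

The alert can then fire on "number of results > 0"; index=your_index and the threshold of 100 are illustrative only.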
Hi Team, I need someone's help to accomplish a requirement. I have limited knowledge of CSS, so I'm seeking your help. I have about 7 dashboards created, and I want one additional dashboard that shows those dashboard names in a tile view, with drilldown enabled to the respective dashboards. The tile view has to be built from the input values. Please find attached an image of the way I want the dashboard to look; in the image the values shown are item1, item2, and so on.

Note: I need the CSS and HTML to be written inside the dashboard itself rather than in a separate CSS file.
Here is a log example:

{"log_time":"2021-08-27T07:16:46.178275260+00:00","output":"stdout","log":"2021-08-27 07:16:46.178 [INFO ] [her-49] a.a.ActorSystemImpl - Logged Request:HttpMethod(POST):http://id-test.api-gateway.sit.ls.api.com/repos/hrn:idmrepo-sit::ol:idm_team_internal_test/ids/getOrCreate/bulk?streaming=true&dscid=GvaIrM-cb4005f6-a828-4fd7-9f54-6082e2912716:200 OK:4","k8scluster":"borg-dev-1-aws-west-1","namespace":"*","env":"DEV","app_time":"","time":"1630048606.178275260"}

I need to extract the digits after "OK:" (highlighted in red above) as a time in ms. I have just started using Splunk. I am trying this:

rex "([^\:]+$)(?P<duration>.+)" | stats exactperc98(duration) as P98 avg(duration) as AVG by log

But this is not working.
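A minimal sketch of an extraction that anchors on the literal "OK:" instead of the end of the line, assuming the JSON field containing the message is already extracted as log (if it is not, drop field=log and let rex run against _raw):

| rex field=log "OK:(?<duration>\d+)"
| eval duration = tonumber(duration)
| stats exactperc98(duration) AS P98, avg(duration) AS AVG

Grouping by log as in the original would give one group per raw message, so the by clause is omitted here; group by a lower-cardinality field instead if a breakdown is needed.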
Hello, how would I write the regex for the following events (3 sample events provided below)? The pairs are comma-delimited, but the closing quotation mark is missing for one value (cit, shown in bold) in some events. Any help will be highly appreciated, thank you.

"time_stamp":"2021-08-21 16:27:06 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"1","TFilterType":"0","ip_addr":"2300:1700:5c08:1030:6d93:7462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"23235672174,"request_id":"32as3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"00","event_type":"SATCUP"

"time_stamp":"2021-08-21 16:27:05 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"1","TFilterType":"0","ip_addr":"2400:1700:5c08:1030:6d93:9462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"232356756174","request_id":"31as3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"01","event_type":"SATCUP"

"time_stamp":"2021-08-21 16:27:08 CDT","app_name":"CT-SATCUP","user_type":"TFilter","file_source_cd":"4","TFilterType":"0","ip_addr":"2100:1700:5c08:1030:6r93:7462:b15d:185c","session_id":"k/NJGhc8dU3OtYoRsrJ+pQzDdYE=","cit":"232356756174,"request_id":"31bs3eee0a-0a31-6214a4e28-7e7fc700-6d792b5b203e","user_id":"cit1ddf82-bf36-40ca-84ae-7964b5680564","return_cd":"01","event_type":"SATCUP"
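A minimal sketch of a rex that tolerates the missing closing quote on cit, assuming only that the value is a run of digits followed by a comma (the \"? makes the closing quote optional):

| rex "\"cit\":\"(?<cit>\d+)\"?,"

If other fields suffer the same intermittent missing quote, the same optional-quote idiom can be applied to each of them.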
Aside from how Splunk creates a checksum for each deployment app bundle, is there a command via the Linux CLI that you can use to cross-check the same numerical value created for each deployment app bundle? I know we can check it via serverclass.xml on the deployment client side.
We have multiple TraceIDs that have the same payload, and this payload is part of many logs for a given TraceID. Here foo1 is a common payload for TraceIDs 1, 3 and 4. Is it possible to search for only the unique TraceIDs 1 and 2 based on the payload, then get all of the logs for those traces?

Input:
TraceID  Type     Name    Payload
1        HEADER   first   foo1
2        HEADER   first   foo2
3        HEADER   first   foo1
4        HEADER   first   foo1

Output:
TraceID  Type     Name    Payload
1        HEADER   first   foo1
2        HEADER   first   foo2

You can get unique TraceIDs grouped by Payload using:

stats max(traceId) as maxTraceId, min(traceId) as minTraceId by payload

Now, how do we feed maxTraceId into another search? We need all of the logs for TraceIDs 1 and 2 only. These attempts did not work:

some_search [ search some_search | stats max(traceId) as maxTraceId by payload | fields maxTraceId ]
some_search [ search some_search | streamstats max(traceId) as maxTraceId bypayload | fields maxTraceId ]
some_search | where traceId IN [ search some_search | stats max(traceId) as maxtraceId by paload | fields maxtraceId ]

Expected full output:
TraceID  Type     Name    Payload
1        HEADER   first   foo1
1        BODY     second  bar1
1        FOOTER   third   baz1
2        HEADER   first   foo2
2        BODY     second  bar2
2        FOOTER   third   baz2
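A minimal sketch of one way to make the subsearch feed TraceIDs back into the outer search, assuming the field is named TraceID in the events and the subsearch stays under the subsearch result limits; renaming the aggregated field back to TraceID is what lets the subsearch expand to (TraceID=1 OR TraceID=2):

some_search
    [ search some_search Type=HEADER
      | stats min(TraceID) AS TraceID by Payload
      | fields TraceID ]

min() is used so the lowest TraceID per payload (1 and 2 in the example) is kept; max() would keep 4 and 2 instead. The field name returned by the subsearch must match the field name in the events, which is the usual reason attempts like maxTraceId return nothing.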
Hi everybody, I ran the following search as the user rest-api-reportingweb; I want to write to a KV store lookup:

| makeresults
| eval Category = "HOST Blacklist"
| eval activation = "09/15/21"
| eval target = "Un test ajout"
| eval url = "http://www.test.html"
| eval tester = "*test.html*"
| eval key=Category.tester.target
| table key,Category,activation,target,tester,url
| outputlookup t_PROXY_lookup append=True override_if_empty=false key_field=key

I get this error:

Error in 'outputlookup' command: Lookup failed for collection 'Condition_List_Mcafee' in app 'Splunk_For_Cnaf_Secuteams' for user 'rest-api-reportingweb': User 'rest-api-reportingweb' with roles { rest-api-reportingweb, si_cnaf, user, wan } cannot write: /nobody/Splunk_For_Cnaf_Secuteams/collections/Condition_List_Mcafee { read : [ * ], write : [ admin, power ] }, owner: adm0-ahuli755, removable: no, modtime: 1614188730.883726000.

I gave this user permissions in the lookup definitions. I can't do it on the lookup file itself, because KV store files don't appear there.

app/local/collections.conf:
[Condition_List_Mcafee]
field.Category = string
field.activation = string
field.target = string
field.tester = string
field.url = string
replicate = true

app/local/transforms.conf:
[t_PROXY_lookup]
external_type = kvstore
collection = Condition_List_Mcafee
case_sensitive_match = true
match_type = WILDCARD(tester)
fields_list = _key,Category,url,activation,target,tester

app/metadata/local.meta:
[transforms/t_PROXY_lookup]
access = read : [ * ], write : [ admin, power, rest-api-reportingweb ]
export = system
owner = nobody
version = 7.3.3
modtime = 1632255805.643188000

I also don't see the file in app/lookups/lookup_file_backups/Splunk_For_Cnaf_Secuteams/nobody.

What am I missing? Thanks for your help. Best regards, Alexandre
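The error text shows that the ACL being enforced is the one on the collection object itself (write : [ admin, power ]), not the one on the transform. A hedged sketch of the corresponding stanza to add in app/metadata/local.meta, assuming the write list from the error plus the extra user is what is wanted:

[collections/Condition_List_Mcafee]
access = read : [ * ], write : [ admin, power, rest-api-reportingweb ]
export = system
owner = nobody

The same change can usually also be made through the collection's ACL via the storage/collections/config REST endpoint; a restart or debug/refresh may be needed before the new permissions take effect.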
I have a simple accelerated report that looks like this:

index=hosts | stats count by hostname ip

I now want to build a dashboard from it using a timechart. However, timechart doesn't work because _time wasn't included in the stats count by clause:

index=hosts | stats count by hostname ip | timechart span=1d count

I can't just add _time to the stats clause in the accelerated report because it increases the number of rows 1000x. I assume creating the accelerated report with timechart will cause the same 1000x issue. So is there a way around this with accelerated reports that won't cause my accelerated summary to grow 1000x? How can I access _time? I'm feeling like it's not possible and that this is a better scenario for traditional summary indexing. Am I wrong?
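A hedged sketch of the usual compromise, assuming daily granularity is acceptable: bin _time to a day before the stats, so the summary grows only by the number of days covered rather than by raw event cardinality:

index=hosts
| bin _time span=1d
| stats count by _time hostname ip

The dashboard panel can then run against the report and finish with | timechart span=1d sum(count) AS count. If finer-grained _time is genuinely needed, summary indexing is indeed the more natural fit.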
Hi Everyone, any help would be appreciated. We have 4 Splunk instances that work together in tandem. All four servers are virtual machines running Red Hat Enterprise Linux 8 and Splunk Enterprise 8.2.2. vCenter is 6.7, with 4 ESXi hosts each running 6.7 as well.

The four Splunk VMs are running at very high CPU usage at all times:
45.8 GHz
83.44 GHz
45.6 GHz
83.82 GHz

This is basically running our ESXi hosts at full capacity. I logged onto each server and ran the top -i command, and each server reports very low CPU usage. Does anyone have any recommendations? Any help would be greatly appreciated.

Thank you,
If my index rolls off data at 30 days, and I run an accelerated report every day to build a summary for that day, will the summary have data going back a year eventually? Or is it limited to 30 days because of my index setting?
I'm working on building a remote deployment for the Splunk Universal Forwarder with PDQ Deploy on our Windows 10 computers. I can run the initial Splunk forwarder .msi installation without issue, but when I try to run the .spl file to sync the computer to our Splunk Cloud environment, it errors out every time. The command I'm using works fine when I run it locally, but I get "login failed" when I run it through PDQ.

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk install app \splunkclouduf.spl -auth username:password

Is there a tweak I can make to the command or another way to accomplish the sync to our cloud environment? Thanks in advance!
Cannot access the cloud trial instance with the sc_admin account. This is the 2nd time the trial has been interrupted. Yesterday I was able to send a reset, received the password reset email, and then I was back in. This morning the sc_admin account was not working again. I tried the same reset process and have tried 7 times today, but I never receive a password reset email. I've even called support, and that person was unable to help me get back into the trial instance. Can someone please advise?

Please note I have not changed the password for the sc_admin account either time that it changed or became inactive. On consecutive days, overnight, the password stops allowing me to access the instance, and this time the password reset process does not appear to be working either. I'd also like to mention that it's not a very good potential-customer experience when a support person on the phone can't even figure out how to get you back into the instance after speaking with their channels.