All Topics


Hello everyone, could you please help me out with the following query? We have TA-Okta_Identity_Cloud_for_Splunk installed on a heavy forwarder. Our customer receives “skinny_user” rate limit warnings from the /api/v1/apps endpoint. The Okta documentation suggests changing the limit to the default value of 20: https://www.okta.com/integrate/documentation/security-enforcement-integrations/security-analytics/#apps-26 However, when I check the limits in our add-on, the ranges of the User, Group, App, and Log limits (min/max values) differ from the ones in the Okta documentation. Could you please help me find the right limit to adjust in Splunk? Thank you!
Hi, in my dashboard I have a set of inputs, and when I submit, the values get stored in a lookup file: 2 dropdowns, 1 multiselect, and 1 text field. Can I store the values from the multiselect as separate records in the lookup? Right now they are all clustered together and I'm not sure how to set a separator. Multiselect name - Type: AAA ttt, BBB fff, CCC eee, hhh qqq ... In the lookup file it shows up as "AAA ttt BBB fff CCC eee hhh qqq". How do I separate it?
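A minimal sketch of the outputlookup pattern, assuming the multiselect token is named $ms_type$ and the lookup is my_inputs.csv (both names hypothetical). Set <delimiter>,</delimiter> on the multiselect so its values arrive comma-separated, then split and expand before writing:

| makeresults
| eval Type=split("$ms_type$", ",")
| mvexpand Type
| table Type
| outputlookup append=true my_inputs.csv

Each selected value then lands in its own lookup row instead of one clustered string.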
Hi everyone, we're using the Splunk Python SDK to run queries in Splunk. However, we seem to be getting the results in a strange format that isn't valid JSON. For example: "_raw":"1618242600, search_name=\"Access - Geographically Improbable Access Detected - Rule\", orig_raw=\"01/12/2020 09:09:09 -0600, search_name=\\\"Access - Geographically Improbable Access - Summary Gen\\\", search_now=1618242600.000, info_min_time=1618177200.000, info_max_time=1618242600.000, info_search_time=1618242614.320, src=\\\"3.121.59.84\\\", dest=\\\"54.212.209.210\\\", user=DUMMY, speed=\\\"555.000\\\", src_app=splunk, src_lat=\\\"44.444\\\", dest_app=splunk, dest_lat=\\\"44.444\\\", distance=\\\"222.222\\\", src_city=Test, src_long=\\\"--88.888\\\", src_time=161800000, dest_city=TEST, dest_long=\\\"-120.000\\\", dest_time=161820000, src_country=\\\"United States\\\", dest_country=\\\"United States\\\", forceCsvResults=\\\"auto\\\"\", orig_time=\"161820000\", dest=\"1.1.1.1\", dest_app=\"splunk\", dest_city=\"TTTest\", dest_country=\"United States\", dest_lat=\"44.44444\", dest_long=\"-120.000\", dest_time=\"161820000\", distance=\"2400.00\", info_max_time=\"1618200000.000000000\", info_min_time=\"1618200000.000000000\", info_search_time=\"1618200000.000000000\", speed=\"555.444\", src=\"1.1.1.1\", src_app=\"splunk\", src_city=\"TeSt\", src_country=\"United States\", src_lat=\"44.4444\", src_long=\"-77.7777\", src_time=\"161820000\", user=\"DUMMY\"" I have some questions, and I would appreciate your help with them: 1. What is the reason for this? I would expect the "_raw" field to be in JSON format. I tried specifying output_mode as "json", but no luck. 2. Is there a common-practice way of getting the raw data in JSON format? Thanks so much for the help!
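For context, the _raw above is Splunk key=value text (an Enterprise Security notable event), not JSON, so no output_mode will turn it into a JSON object by itself. A minimal sketch that parses the pairs into fields on the Splunk side before the SDK fetches the results (the extract options are standard; the base search is a placeholder):

index=notable
| extract pairdelim="," kvdelim="="
| fields - _raw

With output_mode=json, the SDK then returns each extracted field as a proper JSON key.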
Hi Splunk community, I feel like this is a very basic question, but I couldn't get it to work. I want to search my index for the last 7 days and group my results by hour of the day, so the result should be a column chart with 24 columns. For example, my search looks like this: index=myIndex status=12 user="gerbert" | table status user _time I want a chart that tells me how many counts I got over the last 7 days, grouped by hour of the day, for a specific user and status number. Cheers, gerbert
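A minimal sketch building on the search above (only the earliest window and the strftime bucketing are added):

index=myIndex status=12 user="gerbert" earliest=-7d
| eval hour=strftime(_time, "%H")
| chart count by hour

Rendered as a column chart, this gives one column per hour of the day (00-23), aggregated across the 7 days.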
Hello, I am receiving the following warning on my alerts: Health Check: Detected deprecated Threat Intelligence Manager inputs that are not supported by Enterprise Security version 6.4.0 or higher. Recreate these inputs as Threatlist inputs or remove if unnecessary. Drilling down into the results shows the deprecated inputs, which I found in the DA-ESS-ThreatIntelligence/local/inputs.conf file and disabled by changing the 0s to 1s in the "disabled" setting under each input. Do I have to completely remove or comment the inputs out? Why else would I keep receiving alerts about them when they are disabled? Is there anywhere else I should be looking or changing for the deprecated intelligence inputs? Thanks, best regards,
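For reference, a disabled input stanza would look roughly like this in DA-ESS-ThreatIntelligence/local/inputs.conf (the stanza name here is hypothetical; disabled = 1 means off):

[threat_intelligence_manager://example_deprecated_feed]
disabled = 1

If the health check still fires with disabled = 1, it may be scanning for the stanzas themselves rather than their disabled state, in which case removing or commenting out the stanzas entirely is the thing to test.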
I imported a lookup file using the Lookup Editor app. Records in columns whose names consist only of numbers came back as NaN. (Normal values are values such as 1A or 4A, but after import those values are all NaN.) We are currently able to work around it as follows: 1. Before import, rename the numeric-only column names to character strings, then import. 2. After import, change them back to the correct column names. Since this is only a provisional workaround, I would like to solve the underlying problem of records changing to NaN. If anyone knows how to solve this, please reply. Lookup Editor version: 3.4.6, build: 1595011574. Kind regards,
Hello, since daylight saving time became active we have a time offset on our events. For example, we use the Splunk Stream add-on to ingest netflow data. Within the events, the timestamp is "2021-04-13T05:32:31Z", which I understand to mean Z for Zulu (UTC). But when I search for events, _time is 07:32:31, two hours later. Our timezone is Europe/Berlin. How can I get this fixed? In the stream_netflow sourcetype the timestamp is configured to auto. The OS time on the indexer/search head and universal forwarder is correct for CEST. We have several other sourcetypes where the time offset is around 1 or 2 hours.
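If it turns out the source is actually writing local time with a misleading Z suffix, a hedged props.conf sketch for the parsing tier (sourcetype name taken from the post, so verify the exact name; note that TZ only applies when Splunk does not read a zone out of the timestamp itself):

[stream_netflow]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = Europe/Berlin

TIME_FORMAT here deliberately stops before the Z so the Europe/Berlin TZ setting is applied instead of UTC.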
Hi, I want to know how we can change the address of the indexers for our universal forwarders from the deployment server, as we have many UFs. Is there any way to change the forwarders' configuration centrally instead of changing them one by one? Thanks, Shohre
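Indexer addresses live in outputs.conf rather than inputs.conf, and the deployment server can push that file to every UF at once. A minimal sketch of a deployment app (app name and hostnames are placeholders), e.g. deployment-apps/all_forwarder_outputs/local/outputs.conf:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

Map the app to a server class containing the UFs, and they all pick up the new indexer addresses on their next phone-home.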
Selecting a configuration template such as es causes the other streams on the stream forwarder to be disabled, even though they are enabled in the Stream app's Distributed Forwarder Management. In Stream Forwarder Status in the Admin Dashboard I see 16 streams for that stream forwarder, and when I remove configTemplateName = es from streamfwd.conf on the stream forwarder, the streams I had selected in Distributed Forwarder Management show up again! Whenever the configuration template is enabled, all dashboards in the Stream app are empty. Do I have to set up a dedicated stream forwarder just for the es template and the data input for Enterprise Security? Isn't it possible to have streams from the Stream app and the es configuration template at the same time on one stream forwarder?
Hello everyone, I am editing the pie chart section of a dashboard, and I want to add a list of URLs so that one click jumps to multiple pages at the same time.

<drilldown>
  <condition field="a">
    <link target="_blank">https://bilibili.com</link>
  </condition>
  <condition field="a">
    <link target="_blank">https://baidu.com</link>
  </condition>
</drilldown>

The section above only opens a single page A on click. How do I implement a simultaneous jump to page A and page B (or another page)? Please advise.
index="fw" src_ip="192.168.10.*" | rex "192\.168\.10\.(?<range>\d{1,3})" | where range>=11 AND range<=126 | dedup src_ip | stats count I am getting the IPs from the above command. I want to look at four sets of IPs: 192.168.10.16, 192.168.10.21, 192.168.10.26, and 192.168.10.31 through 192.168.10.126. Is there a way?
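A minimal sketch extending the search above to match exactly those four sets (tonumber makes the comparisons numeric):

index="fw" src_ip="192.168.10.*"
| rex "192\.168\.10\.(?<last_octet>\d{1,3})"
| eval last_octet=tonumber(last_octet)
| where last_octet=16 OR last_octet=21 OR last_octet=26 OR (last_octet>=31 AND last_octet<=126)
| dedup src_ip
| stats count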
I'm using WMI to monitor when services are down, but I noticed that servers that don't use the Local System account don't report any data.
We have some issues with line breaking such that our events often consist of multiple logical events, or of fragments of logical records. We've tried a variety of fixes but no joy so far. Anyway, that's not really what my question is about. I'm trying to do a Splunk search that finds only "good" events as in "Scenario 1" below, where the event begins with the XML tag <record> and ends with </record>. There should be no other tags like this in the event; those would indicate an event like "Scenario 2", which contains multiple logical events merged together. Scenario 1: <record> blah blah blah </record> Scenario 2: <record>blah blah blah</record> <record> blah blah blah</record> <record> blah blah blah</record> I learned that this can be accomplished outside of Splunk using a negative lookahead (the (?!...) construct). I tried this in a Splunk search like the one below. | regex "(?s)^<record>((?!<record>)(\s|\S))*<\/record>$" I couldn't get this to work, however, even though it works as expected in regex101: Scenario 1 matches, Scenario 2 doesn't. Any ideas? I used the regex command instead of the rex command because I didn't need to extract anything. Thanks in advance.
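One hedged variant to try, on the assumption that the capturing alternation (\s|\S) is blowing past PCRE backtracking limits inside Splunk (which would make the match fail silently): replace it with a non-capturing dot under (?s) and name _raw explicitly:

| regex _raw="(?s)^<record>(?:(?!<record>).)*</record>$"

In the regex command the forward slash needs no escaping, so the closing tag can be written plainly.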
I have a list of source IP addresses in a CSV file loaded into Splunk as a lookup file. The file has a single field, src_ip, and about 4000 rows of unique IP addresses. I want to take the contents of the lookup file, compare each entry to a search of firewall logs, and report the number of times each entry in the lookup file is present in the firewall data. I have this so far, but the src_ip listed in the results is not always present in the lookup file.

index="firewall" src_ip!="192.168.0.0/16" | fields src_ip | append [ | inputlookup RYUK.csv | fields src_ip] | stats count by src_ip

Any suggestions greatly appreciated. Thanks, Leigh
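A minimal sketch that uses the lookup as a subsearch filter instead of append, so only src_ip values present in RYUK.csv survive to the stats:

index="firewall" src_ip!="192.168.0.0/16"
    [ | inputlookup RYUK.csv | fields src_ip ]
| stats count by src_ip

The subsearch expands to an OR of the ~4000 addresses. With append, the lookup rows themselves were being counted alongside the firewall events, which is why unexpected src_ip values showed up.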
Hi, I have a problem with incorrectly written searches. There are many users on our system, and every user is able to create their own alerts and reports. We would like to take action on incorrectly written searches: how can we write an SPL saved search to see which reports or alerts are written incorrectly? We will then use the REST API to delete them for the health of the system. Thank you very much for your help.
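One possible starting point, hedged: the scheduler logs in _internal record every scheduled run, so runs that did not complete successfully (which includes, but is not limited to, badly written SPL) can be listed per owner:

index=_internal sourcetype=scheduler status!="success"
| stats count by savedsearch_name, app, user, status

The savedsearch_name/app/user triple can then be fed to the REST API for cleanup.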
Hello everyone, I am stuck building a trending dashboard. My data in table format:

_time, ProjectName, summary1, summary2
2021-04-06 05:41:30.027 ProjectA 121 173
2021-04-07 07:06:00.983 ProjectA 121 173
2021-04-08 02:30:47.883 ProjectA 121 173
2021-04-09 05:09:43.243 ProjectA 130 173
2021-04-10 12:07:51.513 ProjectA 130 173

I want to build a dashboard visualization comparing the current summary data against yesterday's, last week's, last month's, and last quarter's data, based on an input field, so that we can derive what the summary was last week for ProjectA, last month for ProjectA, and so on. I tried the search: | timechart avg(summary1), avg(summary2) by ProjectName span=w@w1 | timewrap 1mon
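A minimal sketch, assuming a dashboard token $project$ for the project input and a placeholder index; timechart only accepts a split-by field with a single aggregation, so the project filter moves into the base search:

index=myindex ProjectName="$project$"
| timechart span=1w avg(summary1) AS avg_summary1 avg(summary2) AS avg_summary2
| timewrap 1w

Swap timewrap 1w for 1mon to compare month over month instead of week over week.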
How do AWS logs get ingested into Splunk Enterprise or ES? Please advise on the steps.
Where do I find a list of orphaned searches, reports, and alerts so they can be deleted or disabled? They are using my resources for no reason.
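A hedged sketch for producing that list with standard REST endpoints, run on the search head (an orphaned object is one whose owner no longer matches any existing user):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search NOT [| rest /services/authentication/users splunk_server=local | fields title | rename title AS "eai:acl.owner"]
| table title, eai:acl.app, eai:acl.owner, disabled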
I am using the Website Monitoring app and was curious whether I have something wrong in my head about the credentials. I always thought that if you did not put any credentials into the website input, it would just use the credentials the Splunk service runs under (on a Windows server, that would be the service account running the splunkd service). Was my thinking incorrect? I ask because I am checking a website that my Splunk service account also has access to. When I made the input without credentials, the check results always came back as 401. After I put the service account credentials into the website check credentials section, it now comes back with 200. If this is what is needed, we can do that, but one of the main reasons we worked with teams to grant permissions to our Splunk service account was to avoid having to input the credentials into all of these checks, if the app would just use what Splunk runs its process as. We are currently using version 2.74 of the app.
How do I assign a Splunk server multiple roles? For example, how can I assign my license master to also be a search head?