All Topics

Hi all, I have a column containing Request = REQ_IN ...... { ...... "productId": "test", ...... { ....... "productId": "test2" }}. I need to extract the value of the first productId (test). I am using | rex field=Request "REQ_IN.*\"productId\"(?<productId_rex>[^,]*)" but it returns the second value (test2). How can I solve this? Simone
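A possible fix (untested sketch): the greedy .* in the rex above skips ahead to the last "productId" in the string, while a lazy quantifier .*? stops at the first one:

```
| rex field=Request "REQ_IN.*?\"productId\":\s*\"(?<productId_rex>[^\"]*)\""
```

The [^"]* capture also assumes the value is quoted, which keeps the trailing quote and comma out of the extracted field.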
When using stats count on searches, it does not show zero values for specific time intervals. Example:

index=main sourcetype=test (event=eventA OR eventB) | bin _time span=1h | stats count by _time, event

Sample result:

_time          Event    count
04/27 1:00AM   EventA   10
04/27 2:00AM   EventA   10
04/27 1:00AM   EventB   10

How can I show rows with a zero value?

_time          Event    count
04/27 1:00AM   EventA   10
04/27 2:00AM   EventA   10
04/27 3:00AM   EventA   0
04/27 1:00AM   EventB   10
04/27 2:00AM   EventB   0
04/27 3:00AM   EventB   0
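One possible approach (untested sketch): unlike stats, timechart fills empty time buckets with a zero count by default:

```
index=main sourcetype=test (event=eventA OR event=eventB)
| timechart span=1h count by event
```

This produces one column per event value; appending | untable _time event count would convert it back into rows like the desired table above.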
Hi there, I have a search whose results contain multiple occurrences of one field. My current solution uses rex together with max_match=0 in order to get this:

index="dev_logs" pod::apollo* some.url.com/api statusCode | rex field=_raw max_match=0 "\"statusCode\":(?<ApolloStatusCode>\d+)"

Now I want an alert for the case that the status is neither 200 nor 204. So I played around with this:

| search 200 OR search 204
| search NOT 200 AND search NOT 204
| search NOT [search 200 OR search 204]

To be honest, none of these work. Right now I think the sub-search is the problem, and a solution could be to use field extraction. So I used the field extraction wizard and changed the generated regex to this afterwards:

"statusCode":(?<ApolloStatusCode>\d+)

But this only returns the first occurrence, and I need them all. With field transformations I didn't make any progress, and editing conf files is out of scope... Thanks for any help, Marco
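A possible approach (untested sketch): since max_match=0 makes ApolloStatusCode a multivalue field, mvfilter can keep only the unexpected codes, and where can raise the alert when any remain:

```
index="dev_logs" pod::apollo* some.url.com/api statusCode
| rex field=_raw max_match=0 "\"statusCode\":(?<ApolloStatusCode>\d+)"
| eval badCodes=mvfilter(ApolloStatusCode!="200" AND ApolloStatusCode!="204")
| where isnotnull(badCodes)
```

mvfilter returns null when no values match, so only events with at least one non-200/204 status survive.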
Hello there. What I'm trying to do is the following: I have all the slow queries inside the log, and I'm trying to create a chart of the slow query timings, grouped by 10: one bar counting the queries from 0 to 10 s, the next from 10 to 20, and so on... What I've done so far is this:

index="SUG" "slow query" | rex field=_raw "Slow Query (time: (?<OSY_timing>.*)s):" | eval OSY_new=round(tonumber(OSY_timing),-1) | stats count by OSY_new

Unfortunately, I don't see any results inside OSY_new, where I expected the values rounded to the nearest 10 (if I read the documentation correctly). Any hint on how to do that? Thanks. P.S. I do have the correct values inside OSY_timing.
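A possible alternative (untested sketch): SPL's round() expects a non-negative number of decimal places, so a negative precision like -1 may yield null; bucketing with floor avoids that. The escaped parentheses below assume the log text contains literal parentheses around "time:":

```
index="SUG" "slow query"
| rex field=_raw "Slow Query \(time: (?<OSY_timing>[\d.]+)s\):"
| eval OSY_new=floor(tonumber(OSY_timing)/10)*10
| stats count by OSY_new
```

The bin command (| bin OSY_timing span=10) is another common way to get the same 10-second buckets.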
Hi, I have multiple panels in my dashboard, but the colors of my line chart and pie chart get faded. Could you please let me know of any fix for this? I tried multiple options but am not sure why it's fading. When the dashboard initially loads, the colors in the line chart are fine, as in image 1. But after a few seconds they fade, as in image 2.
sample event 1: id:12345 fcount:20 component:value1 time:2021:04:26
sample event 2: id:12346 fcount:200 component:value2 time:2021:04:26
sample event 3: id:12347 fcount:20 component:value1 time:2021:04:27
sample event 4: id:12348 fcount:200 component:value3 time:2021:04:27

I have a list of values for my field "component" (say value1, value2, value3, ... value15) which appear in my events. For a given day, my events may relate only to components value1 and value2; I want to display the components which were NOT received on that particular day. Example: in the four events above, on 2021:04:26 I got components value1 and value2 but did not receive value3 through value15; I want to display the ones that were not received.
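A possible approach (untested sketch, with the index name and the value1..value15 list hard-coded as assumptions): append a zero-count row for every expected component, then keep the components whose total count stays at zero:

```
index=myindex
| stats count by component
| append
    [| makeresults
     | eval component=split("value1,value2,value3,value4,value5,value6,value7,value8,value9,value10,value11,value12,value13,value14,value15", ",")
     | mvexpand component
     | eval count=0
     | fields component count]
| stats sum(count) as count by component
| where count=0
```

A CSV lookup of expected components with inputlookup would be a more maintainable source than the hard-coded split() list.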
Hi Team, my query: index=*** kubernetes.container_name=*** cluster_id=*** "Number of Files Found". The result will be like: Number of Files Found 2 (or any number). I need to extract that number value alone, and when it is > 0 the count has to be displayed as a chart. How can I edit my query to do that? Do we have any option for that? Please suggest. Thanks!
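One possible way (untested sketch; the field name file_count is made up): extract the number with rex, then filter on it:

```
index=*** kubernetes.container_name=*** cluster_id=*** "Number of Files Found"
| rex "Number of Files Found\s+(?<file_count>\d+)"
| where tonumber(file_count) > 0
| table file_count
```

The resulting single-value field can then be rendered with any chart or single-value visualization in the dashboard.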
Hi team, we have been analysing the Splunk tool for a while. We were very much impressed with the features available. Still, we need to double-check all the features before purchasing. Based on our analysis we concluded the following:

SL.No  Feature                                             Splunk
1      Threat Intelligence                                 Yes
2      Behavior Profiling                                  Yes
3      Data and User Monitoring                            Yes
4      Application Monitoring                              Yes
5      Analytics                                           Yes
6      Log Management and Reporting                        Yes
7      Custom Dashboards                                   Yes
8      Automatic Network Discovery                         Yes
9      Cloud Services Monitoring                           Yes
10     SNMP Support                                        No
11     Active Directory/LDAP Integration                   Yes
12     Agentless Support                                   Yes
13     Failover Mechanism                                  Yes
14     Network Traffic Analysis                            Yes
15     MoM - Monitoring Tool Integration                   No
16     ITSM - Event, Alert and Incident Management         Yes
17     Self Service Portal                                 Yes
18     Dynamic Threshold (AI)                              Yes
19     Data Prediction (AI)                                Yes
20     NoSQL Monitoring                                    Yes
21     Multi Location Support                              Yes
22     Virtualization Monitoring                           Yes
23     SQL Monitoring                                      Yes
24     Open Source                                         No
25     Security Information and Event Management (SIEM)    Yes
26     Correlation                                         Yes

Are we missing something? Do you have any more features other than these? Please reply ASAP. Regards, Shijin Thomas
Greetings!! Will updating the Linux OS version (CentOS) affect Splunk operations? I want to update my OS to the latest version, and I wonder whether this will affect Splunk operations or run into issues. I need your advice and guidance. Thank you in advance!
Hi, I'm new to Splunk, so I must be missing something obvious. I looked through previous questions and the docs but didn't see anything about this problem. I have a Splunk query which correctly returns the statistic I am expecting but shows Events (0) even when Verbose Mode is enabled.

index=* // search redacted
| rex "(?P<json>{.+})"
| table json
| spath input=json
| stats sum(Response.Selected)

When I execute this query from Splunk Search (the web UI), I get the correct sum under the "Statistics" tab, but the "Events" tab shows (0). Any help would be much appreciated.
Hello all, in the image above from my add-on's dashboard, you can see a panel named "Logins by country"; it shows a count of login events grouped by country. But the problem is that when I click on any country cell (say United States), it redirects to the next page but doesn't show any data. Take a look at the image below: here I can see clearly that it is because Splunk is forming the wrong query. Any fixes for this issue? How can I get data by clicking a country cell?
I have the following props configuration:

[log_files]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TRUNCATE = 0
KV_MODE = true
pulldown_type = true
TRANSFORMS_FIELDS = data,time
TIME_FORMAT = %Y-%m-%d %H:%M:%S

My log files contain IIS logs as follows:

2020-01-22 12:00:37 ::1 GET /test - 80 ::1 Mozilla/5.0+(Windows+NT+6.1;+Win64; x64;+rv:47.0)+Gecko/20100101+Firefox/47.0 - 200 2 5 100

Splunk indexes this file with an incorrect time: I get an event with time 15:00:07 instead of 12:00:37 (and I see another field, date_zone=-180). How can I make Splunk index events with the original time from the log file? NOTE: I don't know the logs' timezone.
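One possible adjustment (untested sketch): IIS W3C logs record timestamps in UTC by default, so pinning the sourcetype's timezone with TZ should stop Splunk from applying a local-time offset at index time:

```
[log_files]
TZ = UTC
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

TIME_PREFIX and MAX_TIMESTAMP_LOOKAHEAD are added here only to anchor extraction to the leading 19-character timestamp; the TZ line is the part addressing the shift.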
Hi All, I have a scenario where many servers have the same hostname due to requirements of the applications running on them. The Splunk universal forwarder agent has been successfully deployed on all of them, and inputs.conf and outputs.conf have been manually configured there. These are Windows servers. It's very difficult to manage all of these servers manually by editing inputs.conf, so what I'm trying to do is manage them centrally via the deployment server. However, after deploymentclient.conf has been configured there, not all of the servers show up on the deployment server because they have the same hostname; I get one entry per hostname on the deployment server. My question is: what changes do I need to make so that all of them report successfully to the deployment server? I've been thinking of pushing a deploymentclient.conf file via the deployment server with the clientName value set to $HOSTNAME-$IPADDRESS. Is this possible? What other environment variable can I use, other than $HOSTNAME, to make the clientName unique? Lastly, when the logs are received in Splunk, the host value that shows up has been manually set for each server in inputs.conf as HOSTNAME-IP address; when I remove the manual configurations and push inputs.conf via the deployment server, will host = $HOSTNAME-$IPADDRESS work? Thank you.
Hello, I want to make the following search: index = "myIndex" myfield != "35*". Is there a way to exclude all values of myfield that start with "35" except "35" itself? For example, I want to exclude myfield values 35457, 35568, and 351, but not 35 itself. I know that in regex you can use "+" to indicate "one or more" matches, but I don't know how to use it in a Splunk search. Cheers, Fritz
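A possible approach (untested sketch): the regex command accepts full regular expressions, so "35 followed by at least one more character" can be excluded with:

```
index="myIndex"
| regex myfield!="^35.+"
```

Equivalently, | where NOT match(myfield, "^35.") keeps "35" itself but drops any longer value starting with 35.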
I am preparing a Splunk dashboard. In my dashboard I fixed the time picker at the start of the dashboard, and all the data is displayed with that. Now I need to set another time range separately, only for one chart, which represents the incoming data of the previous period at this timestamp. How can I do this?
We have some scheduled PDF reports with Traditional Chinese characters in them, but the default font (MSung-Light) used for Traditional Chinese characters in PDF generation is not built into Adobe Acrobat Reader, and our company's policy doesn't allow us to download any font package from the internet. In Splunk 7.3.3, we could put the Windows built-in font (PMingLiU.ttf) into $SPLUNK_HOME/share/splunk/fonts/ and Splunk would use it instead of the default font (MSung-Light) in PDF generation; therefore, we could read the PDF report without downloading any font package. But since we upgraded to Splunk 8.1.3, it no longer works: all the Traditional Chinese characters in the PDF are garbled. How can we solve this issue?
I was asked if IOC information from Splunk Enterprise Security could be used as a dataset. For example, is it possible to use it as follows?
・Call Splunk ES IOC information with SPL and display a list.
・Detect threats by comparing Splunk ES IOC information with IPs or domains included in various logs.
Also, what kind of IOC information does Splunk ES have (IP addresses, User-Agents, domain information, etc.)? Can you tell me if there is a description somewhere? Thank you.
Hello all, it's the first time I've actually posted a question here, since most topics are documented quite well and many questions have already been asked and answered. However, I've finally found an issue that I cannot find any answer to... I guess that Splunk is not designed for this, but I nevertheless want to build something like it: I'm currently building a dashboard that serves, among other purposes, as a documentation site for adding new values or modifying them (in a CSV lookup file). The issue I have is that although I created one query for creating new entries (via | makeresults ... etc.) and a separate one for modifying existing entries, I can't combine them into one and switch between the two functions based on a value provided by an input field. So far I've tried the following as a "switch function":

| eval var=case(switch="yes","| append [| makeresults | eval ExternalId=",switch="no","| search ExternalId=",1==1,"| append [| makeresults | eval ExternalId=")

In a second attempt I put the whole case-dependent part into the variable, e.g.:

| append [| makeresults | eval DisplayName="$displayname$" | eval ExternalId="$location$" | eval Address="$address$" | eval Location_type="$location_type$" | eval Primary_contact="$primary_contact$" | eval Secondary_contact="$secondary_contact$" | eval Regional_manager="$regional_manager$" | eval spoc="$spoc$" | eval subnets="$subnets$"]

However, in this case Splunk takes the variable references as literals and creates an entry that looks as follows: $displayname$ $location$ $address$ $location_type$ $primary_contact$ $secondary_contact$ $regional_manager$ $spoc$ $subnets$. I've tried the known escape chars etc., but nothing worked. Do you have any ideas on how to solve this issue? Many thanks ahead.
I want to run a search query where the bin span value changes based on field values. Example: instead of using this,

index=main sourcetype=test (host=hostnameA OR hostnameB OR hostnameC) | bin _time span=1h | stats count by _time, host

for hostnameA I want the span value to be every 10m, for hostnameB every 30m, and for hostnameC every 1h. Thanks!
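A possible approach (untested sketch): bin itself only takes one fixed span, but a per-host span in seconds can be computed with case() and _time truncated to that bucket with modulo arithmetic:

```
index=main sourcetype=test (host=hostnameA OR host=hostnameB OR host=hostnameC)
| eval span=case(host="hostnameA", 600, host="hostnameB", 1800, host="hostnameC", 3600)
| eval _time=_time - (_time % span)
| stats count by _time, host
```

Note that the resulting rows will have mixed bucket widths, so a single timechart over them may look uneven.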
I'm running into a strange issue with checkpointing, and it seems to have to do with the JSON array returning events in no clear order. The REST URL I'm querying looks like this: https://RESTURL.com/api/incidents?updated_after=2021-04-25T12:00:00Z

Sample output:

[
  {
    "id": 847,
    "summary": "test",
    "updated_at": "2021-04-25T12:23:57Z"
  },
  {
    "id": 842,
    "summary": "test 2",
    "updated_at": "2021-04-26T14:44:55Z"
  }
]

If I try to use the "updated_at" time from the last event, using a "Checkpoint field path" like [-1].updated_at, the same event often stays as the last event in the array, even if there are others that are more up to date, so the checkpoint doesn't increment (same issue if I try [0].updated_at). So with something like the example above, the app will keep querying for updated_after=2021-04-26T14:44:55Z until the order of events happens to randomly change down the line. Is there a way to use either "JSON path" or "Checkpoint field path" to find the event with the most recent "updated_at" time and use that as the next checkpoint? Unfortunately there aren't any parameters (per the data source's API documentation) I can use in the REST URL to sort the JSON array. Any help would be great. Thank you.