All Topics

If storage space is not really a concern, are there any other reasons to set the search factor lower than the replication factor? Thanks.
Hi all, how can I find out whether a props.conf stanza has been defined for a particular sourcetype? How can I check this?
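One common way to check (a sketch, assuming shell access to the Splunk server; replace the sourcetype name with yours) is btool, which prints the effective props.conf configuration and, with `--debug`, the file each setting comes from:

```
$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug
```

If the sourcetype prints no stanza at all, no props have been defined for it in any app.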
I need to create a pie chart from two different searches/indexes. I have two separate queries that show the total counts from my search results.

Query 1: index="first_index" | stats count by auth.metadata.role_name | rex field=auth.metadata.role_name | dedup auth.metadata.role_name | stats count

Query 2: index="second_index" sourcetype="mysource" (request.path="my/path/*" OR request.path="my/path/sign/*") NOT (request.path="not/my/path" OR request.path="also/not/my/path") response | eval expired=if((now() > 'response.data.expiration'),1,0) | table _time, request.data.common_name, expired, auth.metadata.role_name | rename request.data.common_name as cn | search "auth.metadata.role_name"="my_role_name" | table cn | dedup cn | stats count

Query 1 represents 100%. How can I make query 2 show a percentage, using query 1 as the 100%? I.e. if query 1's stats count = 150 and query 2's stats count = 75, then query 2 should show 50%. Thanks
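One way to combine the two single-row counts into a percentage is appendcols (a sketch; the subsearch below is an abbreviated stand-in for the full query 2 above, and assumes the `cn` rename happens inside it):

```
index="first_index"
| dedup auth.metadata.role_name
| stats count AS total
| appendcols
    [ search index="second_index" sourcetype="mysource"
      | rename request.data.common_name AS cn
      | dedup cn
      | stats count AS matched ]
| eval percent = round(100 * matched / total, 1)
```

Since both the outer search and the subsearch reduce to exactly one row, appendcols simply pastes the two counts side by side so eval can divide them.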
When I generate a notable "for each result", the maximum number of notables created is 250 or 500. I want every result to produce a notable.
This is my sample event:

onlinequoteinguser 2023-01-11T10:27:13,843 INFO DigitalPortal.xxxeSubmissionUtil {"hostName": "xxx80hlxxda044", "SourceSystem": "null", "level": "INFO", "message": "Start | newSubmission", "serverId": "prod-xxx_xx78", "userId": "onlinequoteinguser", "contextMap": [ {"JsonRpcId":"b55296cf-81e1-4xxx-8064-052dxx416725_5"}, {"methodName":"createOrUpdateDraftSubmission"}, {"traceabilityID":"7cxxx367-09aa-4367-87d4-b120526xxxcb"}, {"requestPath":"\/edge\/xxxquoteflow\/letsgetstarted"}], "applicationName": "xx", "timestamp": "20230111T102713.841-0500"}

Here is my query to retrieve specific events based on my JSON field:

index=app_xx Appid="APP-xxxx" Environment=PROD "contextMap{}.methodName"="createOrUpdateDraftSubmission"

How do I build the appropriate search?
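Because the JSON payload in the sample event is preceded by plain text (user, timestamp, level), automatic KV extraction may not parse it. One sketch is to pull out the JSON object with rex and parse it explicitly with spath (field names taken from the sample event above):

```
index=app_xx Appid="APP-xxxx" Environment=PROD
| rex field=_raw "(?<json>\{.*\})"
| spath input=json path=contextMap{}.methodName output=methodName
| search methodName="createOrUpdateDraftSubmission"
```

Once `methodName` extracts correctly, the same spath path can be reused for the other contextMap entries such as `traceabilityID`.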
I've been trying to find an answer to this, and it seems like it's supposed to work, so I'm not sure whether I have a misconfiguration or I'm doing something wrong. I have a deployment server set up to deploy 1 or 2 apps. If I go to a machine that has the deployment server configured and has had the configured app installed (through the deployment server), and I delete configuration files, etc., the files never get replaced by the deployment server. It will replace files if I delete the entire app folder, but not individual files.

Would anyone have any clues on why this is happening? I have tried the "reload deploy-server" command, but it didn't seem to do anything. Am I being unrealistic to assume individual files would also be checked against the deployment server app? I want to ensure that inputs, outputs, etc. are uniform, and if some were to get deleted or changed, I would need the change to get pushed; for example, if someone went in and deleted conf files intentionally or accidentally.
How do I find out how many heavy forwarder licenses/instances I can install?
I have written an add-on that gets data from an API and yields rows from it one by one in a loop. I use the GeneratingCommand class from splunklib. When I run it on the search head, it runs for a while and then returns all the rows at once. So it seems that Splunk buffers the results until the whole process is complete, even though the add-on code itself has no such buffering. Is it possible to have it show the rows on the search head as they are yielded, similar to how a normal Splunk search does?
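As a sketch (and an assumption worth verifying: recent splunklib versions expose a flush() method on search commands, but check the version bundled with your add-on), calling flush() after each yield asks the SDK to emit buffered records immediately instead of holding them until the command exits:

```
import sys, time
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration()
class StreamRowsCommand(GeneratingCommand):
    def generate(self):
        # Stand-in for the real API pagination loop
        for i in range(100):
            yield {"_time": time.time(), "row": i}
            self.flush()  # push buffered rows to Splunk now rather than at exit

dispatch(StreamRowsCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```

Even with flushing, whether rows appear incrementally in the UI also depends on the search mode and preview settings of the running search.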
We are currently experiencing an issue in our 9.0.2 environment where our syslog UFs are unable to connect to our indexers. When we look at splunkd.log on our syslog servers, we see:

WARN AutoLoadBalancedConnectionStrategy [3438113 TcpOutEloop] - Cooked connection to ip=xxx.xxx.xxx.xxx:9997 timed out

These servers are in the same VRF, so there is no firewall in between; we have useACK and autoBatch set to false for the 9.x workaround, and the indexers are receiving all data from our non-syslog UFs. These syslog servers had been working just fine up until a day or two ago. Any additional troubleshooting suggestions would be much appreciated.
Hello everyone, we have a question regarding cold storage and archiving: is it possible to archive Splunk logs at a single site in a distributed multisite environment? If yes, can you share any documents on how to configure this? One of our customers does not want cold storage at DR: is it possible to configure cold storage only at the DC, skip it at DR, and just configure the hot storage path in indexes.conf there? Let us know whether the scenario described above is possible.
Hi, suppose I have a multivalue field representing names, which can have different values in each event. For example, names (ordered by time, descending):

event 1: Emma, Dan, Mike
event 2: Dan, Patrick
event 3: Mike, Olivia

In addition, I have another multivalue field representing the corresponding people's grades (matched by position), also ordered by time, descending:

event 1: 80, 70, 100
event 2: 90, 75
event 3: 88, 95

I would like to take each person's last grade (i.e. take all people ever seen, without duplications). My result should look like:

Emma 80
Dan 70
Mike 100
Patrick 75
Olivia 95
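A common sketch for pairing positional multivalue fields is mvzip + mvexpand, then dedup on name to keep each person's most recent grade (this assumes events are sorted newest first and that names contain no commas, since mvzip joins with a comma by default):

```
| eval pair = mvzip(names, grades)
| mvexpand pair
| eval name  = mvindex(split(pair, ","), 0),
       grade = mvindex(split(pair, ","), 1)
| sort 0 - _time
| dedup name
| table name grade
```

After mvexpand, each name/grade pair is its own row, so dedup on name keeps only the first (newest) grade per person.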
Hi, is it possible to have the annotation on only one chart when viewing by trellis? For example, in the image below, I only want the annotation on the first chart. For context:

- I'm using the trellis visualization.
- The annotation is currently added via this part of the XML source:

<search type="annotation">
  <query>| makeresults | eval _time="2022-09-20 00:00:00", message="Change equipement", type="type1" | eval annotation_label = message </query>
  <earliest>0</earliest>
  <latest></latest>
</search>
<option name="charting.annotation.categoryColors">{"type1":"0xffcc00"}</option>

- The list of IDs with the date of change is stored in a lookup CSV. I tried the following, but it shows the annotation on every chart:

<search type="annotation">
  <query>|inputlookup list_id.csv |search NoEq=$id$ | eval _time=_time, message="Change", type="type1" | eval annotation_label = message </query>
  <earliest>0</earliest>
  <latest></latest>
</search>
We have a standalone Splunk v8.2.0 deployment with 16 vCPU / 32 GB memory. From other posts here, I started looking through the Monitoring Console, and in "Indexing Performance: Advanced" I found that "summarydirectorsearchexecutorworker" is constantly running above 92%, 100% of the time. How can I bring this down?
I am aware that it is pretty simple to programmatically add new apps to Splunk's deployment server: you simply drop them in the right directory and invoke a reload of the service. For server classes, however, this does not seem so trivial. The official documentation asks us to add apps to a server class manually, from the user interface. I don't like this much, as it is error-prone and non-reproducible. Is there any way (or magic location in the deployment server's filesystem, as for the apps) where I can drop the configuration of a server class, for it to be loaded into the graphical UI automatically? This would allow me to configure a simple Ansible/Puppet/Chef... task to take care of those files.
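Server classes are stored on disk in serverclass.conf, which the UI merely edits, so a config-management tool can drop a file like the following (stanza, whitelist pattern, and app name are made-up examples) and then run `splunk reload deploy-server`, after which the server class appears in the UI:

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:my_linux_hosts]
whitelist.0 = linux-*

[serverClass:my_linux_hosts:app:my_app]
restartSplunkd = true
```

The app-specific stanza maps the app (already dropped under deployment-apps) onto the class, which is exactly what the manual UI step does.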
I don't seem to be able to integrate two radio buttons on a single dashboard to achieve the following selection logic. Each radio button has two options, with an "OR" condition on the second radio button: the choice made in the first radio button drives which of two forms the second radio button displays.

RB1 offers State1 or State2:
- If State1 is selected in RB1, RB2 displays State1a and State1b for selection.
- If State2 is selected in RB1, RB2 displays State2a and State2b for selection.

Is this possible, and if so, how can it be accomplished on a dashboard in Simple XML?
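Yes. One sketch in Simple XML is to render two variants of RB2 and toggle their visibility with tokens set from RB1's change handler (all token and value names below are made up for illustration):

```
<fieldset>
  <input type="radio" token="rb1">
    <label>RB1</label>
    <choice value="state1">State1</choice>
    <choice value="state2">State2</choice>
    <change>
      <condition value="state1">
        <set token="show_rb2_s1">true</set>
        <unset token="show_rb2_s2"></unset>
      </condition>
      <condition value="state2">
        <set token="show_rb2_s2">true</set>
        <unset token="show_rb2_s1"></unset>
      </condition>
    </change>
  </input>
  <input type="radio" token="rb2_s1" depends="$show_rb2_s1$">
    <label>RB2</label>
    <choice value="state1a">State1a</choice>
    <choice value="state1b">State1b</choice>
  </input>
  <input type="radio" token="rb2_s2" depends="$show_rb2_s2$">
    <label>RB2</label>
    <choice value="state2a">State2a</choice>
    <choice value="state2b">State2b</choice>
  </input>
</fieldset>
```

Downstream searches would then reference whichever of `$rb2_s1$` / `$rb2_s2$` is active; a coalescing eval or an extra change handler can merge the two into one token if needed.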
Hi, I have the following Splunk search:

| makeresults | eval _raw="The first value is 0.00 and The second value is 0\",\"origin\":\"rep\",\"source_instance\":\"0\"" | rex "The\sfirst\svalue\sis (?<from>.*) and\sThe\ssecond\svalue\sis (?<to>.*)"

This shows the "from" field as 0.00 and the "to" field as 0","origin":"rep","source_instance":"0". In the "to" field I only want the value 0. How do I achieve that?
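The greedy `.*` in the `to` group captures the rest of the line. Constraining the capture to a number fixes it (a sketch, assuming both values are always numeric):

```
| makeresults
| eval _raw="The first value is 0.00 and The second value is 0\",\"origin\":\"rep\",\"source_instance\":\"0\""
| rex "The\sfirst\svalue\sis\s(?<from>[\d.]+)\sand\sThe\ssecond\svalue\sis\s(?<to>[\d.]+)"
```

With the sample event, this extracts from=0.00 and to=0, since `[\d.]+` stops at the first quote.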
Good morning/afternoon/evening, community! I've run into an issue detecting VPN tunnel interface statuses, which are identified by ping data inputs. Can you give me some ideas on how to organize a search to print a table like the one below? The first table represents the logic for detecting the tunnel status. Thanks in advance for any response!
I need to create an alert for a service, but real-time alerts are disabled by the admin. I need an alert that emails me immediately when my service gets more than 5 "bad service" events. I created the alert, but it sends the email only at the end of the time-range cycle: with Time range "last 30 minutes", cron expression */30 * * * *, and expiry of 24 hours, it runs and sends email, but only at the end of the 30-minute cycle, not at the moment the alert condition occurs. Is there any way to make it trigger at the same time the condition appears? Please help me.
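Without real-time alerting, the usual workaround is to schedule the search much more frequently than the window it inspects, e.g. every five minutes over a sliding 30-minute range. A savedsearches.conf sketch (stanza name is a placeholder; key names per the savedsearches.conf spec, worth verifying against your version):

```
[my_service_alert]
cron_schedule = */5 * * * *
dispatch.earliest_time = -30m
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 5
```

This cuts the worst-case delay from ~30 minutes to ~5 minutes; adding throttling (alert.suppress) avoids repeated emails while the same condition persists.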
Good evening, we are unable to send data to a Splunk Cloud trial instance. To send data to the HTTP Event Collector, we followed this document and sent a POST request to the URL https://http-inputs-<hostname>.splunkcloud.com:8088/services/collector/event from Postman, and got "getaddrinfo ENOTFOUND" as the error. On trying a POST request to the URL https://<hostname>.splunkcloud.com:8088/services/collector/event, we got "Error: Request timed out". Is the documentation wrong? How do I get this working?
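My understanding (an assumption worth checking against the Splunk Cloud HEC documentation for trial stacks) is that trial instances use an `inputs.` hostname prefix rather than the `http-inputs-` prefix used by managed stacks. A curl sketch, keeping the hostname and token as placeholders:

```
curl -k "https://inputs.<hostname>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'
```

The ENOTFOUND error suggests the hostname simply does not resolve, which is consistent with using the managed-stack prefix against a trial stack.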
I cannot access the Splunk UBA web interface. It's on a single Linux server; it passed the precheck and setup is done, and when I check the Caspida status it looks OK. Is it because of a firewall rule, Docker, or something else? Please help me.

Browser: Chrome / Firefox
OS: RHEL 8.5
UBA version: 5.1.0
Network: eth0

Thanks for the help.

UPDATE: This is my firewall warning.
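If the firewall is the suspect, a quick check on RHEL 8 would be firewalld (a sketch; the UBA web UI port is assumed to be 443 here, so confirm the actual port for your deployment first):

```
sudo firewall-cmd --list-all                      # show currently open ports/services
sudo firewall-cmd --permanent --add-port=443/tcp  # open the assumed web UI port
sudo firewall-cmd --reload                        # apply the permanent change
```

If the port is already open, the next things to check would be the Docker service state and the UBA service logs.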