Hello, while using sitimechart instead of timechart, the data has changed. I would like to calculate an error percentage, but the system only shows 0 or raw field counts. Thanks!

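For what it's worth, the si- commands write prestats-format results meant for a summary index; if you read those events directly you see internal psrsvd* fields and zeros rather than finished numbers, and the usual pattern is to finish the calculation with the matching non-si command. A minimal sketch with hypothetical index and field names:

(scheduled, summary-populating search)
index=web | sitimechart span=1h count by status

(reporting search over the summary index)
index=summary source="my_summary_search" | timechart span=1h count by status
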
Hello, we upgraded our Red Hat 7 to 9 this past Monday, and Splunk stopped sending emails. We were inexperienced and unprepared for this, so we upgraded our Splunk Enterprise from 9.1 to 9.1.3 to see if this would fix it. It did not. Then we upgraded to 9.2; that did not fix it either.

I started adding debug mode to everything and found that Splunk would send the emails to Postfix, and the Postfix logs would state the emails were sent. However, after looking at it more closely, I noticed the From field of the Splunk sendemail-generated emails looked like splunk@prod, not splunk@prod.mydomain.com (as it did before we upgraded to Red Hat 9). When we use mailx, the From field is constructed correctly, e.g. splunk@prod.domain.com. Extra Python debugging does not show the From field, only the user and the domain: 'from': 'splunk', 'hostname': 'prod.mydomain.com'.

My stanza in /opt/splunk/etc/system/local/alert_actions.conf:

[email]
hostname = prod.mydomain.com

Does anyone know how to fix this? Is there a setting in Splunk that would make sure the email From field is constructed correctly? It is funny that if you add an incorrect To address Splunk whines, but if Splunk creates an incorrect From address in sendemail, it is fine: it just sends it to Postfix and lets Postfix handle it, lol. dandy

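If it helps, alert_actions.conf supports an explicit from setting for the email action, which sidesteps the hostname-based From construction entirely; a minimal sketch (the address itself is an assumption):

[email]
hostname = prod.mydomain.com
from = splunk@prod.mydomain.com
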
Hello, I am not getting emails for the alerts I have set up in Splunk. Can anyone please help?

Can this be done, or is the official Splunk guidance to use an indexer cluster? I'm curious whether there is any current (potentially) possible method to achieve high availability with only 2 indexers. My reading on indexer clusters has me thinking one needs a minimum of 3 licensed Splunk instances; at least, that's what I got from Splunk's documentation: you need one master node and at least 2 dedicated indexer peers. Where the search head goes in all of that and how that would be supported, I have no clue. I'm sure everyone can think of a very green reason why one would want a pair of indexers to serve high availability without being forced into an indexer-cluster kind of deployment. I can see older posts where this apparently used to be supported, but my understanding now is that the only Splunk-supported high-availability deployment is via indexer clusters. Can anyone confirm?

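For context on the non-cluster option, the closest thing I know of is forwarder auto load balancing across the two indexers: it keeps ingestion alive if one indexer dies, but nothing replicates already-indexed data between them, so searches lose the dead peer's buckets. A sketch of the forwarders' outputs.conf, with hypothetical hostnames:

[tcpout:two_indexers]
server = idx1.mydomain.com:9997, idx2.mydomain.com:9997
autoLBFrequency = 30
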
Hello, I am trying to troubleshoot sendemail.py, since after an upgrade to Red Hat 9 our Splunk stopped sending emails. I understand the command to use the Splunk Python interpreter in the CLI is:

splunk cmd python /opt/splunk/etc/apps/search/bin/sendemail.py

However, how do I combine the above with the _internal search results below so I can see what the interpreter would provide as feedback (such as errors)?

_raw results of a sendemail:

subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", encoded_subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", results_link="https://MyWebsite:8080/app/search/@go?sid=scheduler__nobody__search__RMD50fd7c7e5334fc616_at_1712993040_1213", recipients="['sysadmin@MyWebsite.com']", server="localhost"

Any examples would be greatly appreciated. Thanks, a totally blind Splunker with a mission

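Since sendemail is a search command, one way to exercise sendemail.py through the Splunk interpreter and see its errors is to run it inside a throwaway search from the CLI; a sketch with placeholder credentials, reusing the recipient and server from the _internal event above:

splunk search '| makeresults | sendemail to="sysadmin@MyWebsite.com" server=localhost subject="sendemail test"' -auth admin:changeme

Errors from the command typically also land in $SPLUNK_HOME/var/log/splunk/python.log.
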
  <input type="dropdown" token="envtoken"> <label>env</label> <fieldForLabel>label</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> index=aaa (source="/v... See more...
  <input type="dropdown" token="envtoken"> <label>env</label> <fieldForLabel>label</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> index=aaa (source="/var/log/testd.log") |stats count by host | eval label=case(match(host, ".*tv*."), "Test", match(host, ".*qv*."), "QA", match(host, ".*pv*."), "Prod")| dedup label</query> <earliest>-15m</earliest> <latest>now</latest> </search> </input>   dropdownlist binding with TEST, QA and PROD In QA and prod have 3 host. If i select QA from dropdown list , will the search includes from all the three hosts? could you plase confirm
Hi Team, I need to extract the values of fields that have multiple values, so I used commands like mvzip, mvexpand, mvindex, and eval. However, the output of my SPL query does not match the count of the interesting field. Could you please assist? Here are my SPL query and output screenshots below.

index="xxx" sourcetype="xxx" source=xxx events{}.application="xxx" userExperienceScore=FRUSTRATED
| rename userActions{}.application as Application, userActions{}.name as Action, userActions{}.targetUrl as Target_URL, userActions{}.duration as Duration, userActions{}.type as User_Action_Type, userActions{}.apdexCategory as useractions_experience_score
| eval x=mvzip(mvzip(Application,Action),Target_URL), y=mvzip(mvzip(Duration,User_Action_Type),useractions_experience_score)
| mvexpand x
| mvexpand y
| dedup x
| eval x=split(x,","), y=split(y,",")
| eval Application=mvindex(x,0), Action=mvindex(x,1), Target_URL=mvindex(x,2), Duration=mvindex(y,0), User_Action_Type=mvindex(y,1), useractions_experience_score=mvindex(y,2)
| eval Duration_in_Mins=Duration/60000
| eval Duration_in_Mins=round(Duration_in_Mins,2)
| table _time, Application, Action, Target_URL, Duration_in_Mins, User_Action_Type, useractions_experience_score
| sort - _time
| search useractions_experience_score=FRUSTRATED
| search Application="*"
| search Action="*"

Query output with the statistics count: (screenshot)

Expected count: (screenshot)

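One thing that may explain the mismatch: expanding x and y with two separate mvexpand commands produces a cross-product of rows, and dedup x only partially undoes that. A sketch that zips all six fields into a single multivalue and expands once (field names as in the post):

| eval z=mvzip(mvzip(mvzip(mvzip(mvzip(Application,Action),Target_URL),Duration),User_Action_Type),useractions_experience_score)
| mvexpand z
| eval z=split(z,",")
| eval Application=mvindex(z,0), Action=mvindex(z,1), Target_URL=mvindex(z,2), Duration=mvindex(z,3), User_Action_Type=mvindex(z,4), useractions_experience_score=mvindex(z,5)
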
Hello All, I want to build a Splunk query using stats to get the count of messages for the last 5 min, last 10 min, and last 15 min, something like below. Kindly let me know how this can be achieved.

Transaction    Last 5min Vol    Last 10min Vol    Last 15min Vol
A
B
C

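A sketch of one way to do this with eval-based counts, assuming a 15-minute search window and placeholder index/field names:

index=my_index earliest=-15m
| eval age = now() - _time
| stats count(eval(age<=300)) as "Last 5min Vol", count(eval(age<=600)) as "Last 10min Vol", count as "Last 15min Vol" by Transaction
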
Hello Splunkers!! I want to run Splunk over HTTPS. I am using Windows Server. How do I generate a certificate and truststore in Splunk in a few easy steps? I followed the document below, but it is not giving any good results.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Howtoself-signcertificates

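In case it's useful while working through that doc: once the certificate and private key files exist, pointing Splunk Web at them is a web.conf change; a minimal sketch (the file names and mycerts folder are assumptions):

[settings]
enableSplunkWebSSL = true
serverCert = etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = etc/auth/mycerts/mySplunkWebPrivateKey.key
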
index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message

OUTPUT:

RampdataSet | Initial message received with below details | Letter published correctley to ATM subject | Letter published correctley to DMM subject | Letter rejected due to: DOUBLE_KEY | Letter rejected due to: UNVALID_LOG | Letter rejected due to: UNVALID_DATA_APP
WAC | 10 | 0 | 0 | 10 | 0 | 10
WAX | 30 | 15 | 15 | 60 | 15 | 60
WAM | 22 | 20 | 20 | 62 | 20 | 62
STC | 33 | 12 | 12 | 57 | 12 | 57
STX | 66 | 30 | 0 | 96 | 0 | 96
OTP | 20 | 10 | 0 | 30 | 0 | 30
TTC | 0 | 5 | 0 | 5 | 0 | 5
TAN | 0 | 7 | 0 | 7 | 0 | 7

But we want the output as shown below, where:

Total = "Letter published correctley to ATM subject" + "Letter published correctley to DMM subject" + "Letter rejected due to: DOUBLE_KEY" + "Letter rejected due to: UNVALID_LOG" + "Letter rejected due to: UNVALID_DATA_APP"

| table "Initial message received with below details" Total

RampdataSet | Initial message received with below details | Total
WAC | 10 | 20
WAX | 30 | 165
WAM | 22 | 184
STC | 33 | 150
STX | 66 | 222
OTP | 20 | 70
TTC | 0 | 15
TAN | 0 | 21

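A sketch of one way to get that shape, using addtotals over the five letter-status columns produced by the chart (column names copied from the output above):

... | chart count over RampdataSet by Message
| addtotals fieldname=Total "Letter published correctley to ATM subject" "Letter published correctley to DMM subject" "Letter rejected due to: DOUBLE_KEY" "Letter rejected due to: UNVALID_LOG" "Letter rejected due to: UNVALID_DATA_APP"
| table RampdataSet "Initial message received with below details" Total
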
Hi, I'm connecting to a Vertica database. The latest JDBC driver has been installed and connects to an older Vertica instance. I set up an Identity with the username and password, but when I tried to create a Connection, it failed with an authentication warning.

My solution for now was to edit the JDBC URL manually via the interface and add the user and password parameters, as shown below:

jdbc:vertica://my.host.name:5433/databasename?user=myusername&password=mypassword

The connection now works, which proves that the JDBC driver and credentials are working. This isn't a proper solution, though, as anyone with administration privileges in DB Connect can see the username and password by editing that connection.

Any ideas on how to make a Vertica JDBC connection use the Identity that was set up? The jdbcUrlFormat in the configuration is:

jdbc:vertica://<host>:<port>/<database>

I was wondering if one solution is a way to reference the Identity here, e.g.:

jdbc:vertica://my.host.name:5433/databasename?user=<IdentityUserName>&password=<IdentityPassword>

I have tried similar things and that doesn't work either.

Hi, I am trying to execute a multiline Splunk command like the one below using the REST endpoint services/search/v2/jobs/export:

https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport

Search command:

| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false

When I execute the curl it returns success (200), but the file is not created. Is it possible to invoke a multiline search command with pipes via this or any other search API? The search is dynamic, so I can't create a saved search and execute it.

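For reference, a sketch of how the multiline, piped search can be passed to that endpoint with curl; newlines inside the search parameter are fine as long as it is URL-encoded (the host and credentials are placeholders):

curl -k -u admin:changeme https://localhost:8089/services/search/v2/jobs/export \
  -d output_mode=json \
  --data-urlencode search='| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false'
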
What is the error in the below query, which I am using to populate a drop-down list?

index=aaa (source="/var/log/testd.log")
| stats count by host
| eval env=case(match(host, "*10qe*"), "Test", match(host, "*10qe*"), "QA", match(host, "*10qe*"), "Prod")

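A couple of observations that may be the issue: match() takes a regular expression, and "*10qe*" is not valid regex (a pattern cannot start with *); also, all three case() branches test the same pattern, so nothing past "Test" could ever match. A sketch with hypothetical distinct substrings per environment:

index=aaa (source="/var/log/testd.log")
| stats count by host
| eval env=case(match(host, "10qe"), "Test", match(host, "10qa"), "QA", match(host, "10pd"), "Prod")
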
Hi everybody. I have three Splunk instances in three Docker containers on the same subnet. I have mapped port 8089 to port 8099 on each container. There are no firewalls between them. I checked the route from/to all containers (via port 8099) and there are no blocks and no issues. But when I try to add one of the containers' Splunk as a search peer in a distributed search deployment, I always receive the error "Error while sending public key to search peer". Any suggestions? Thanks to everybody in advance.

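If it helps narrow things down, adding the peer from the search head's CLI sometimes gives a clearer error than the UI; a sketch with placeholder hostnames and credentials:

splunk add search-server https://container2:8099 -auth admin:changeme -remoteUsername admin -remotePassword changeme
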
I am using the below query (server names replaced) to find when there is a greater than 50% difference in volume between 2 call routers (servers). For some reason I'm getting no timechart results, even when setting the difference to 1%, which should always return results.

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| stats max(Value) as Value by host
| eventstats max(if(host='SERVER1', Value, null)) as server1_value max(if(host='SERVER2', Value, null)) as server2_value
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, (value_difference / coalesce(server1_value, server2_value) * 100), 0)
| where value_percentage_difference > 1
| timechart avg(value_percentage_difference)

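Two things stand out that a variant might address: stats by host discards _time, so the final timechart has nothing to bucket, and in eval, single quotes ('SERVER1') refer to a field name rather than a string literal. A sketch that keeps _time and uses double quotes (the 5-minute span is an assumption):

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| bin _time span=5m
| stats max(eval(if(host=="SERVER1", Value, null()))) as server1_value max(eval(if(host=="SERVER2", Value, null()))) as server2_value by _time
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, value_difference / coalesce(server1_value, server2_value) * 100, 0)
| timechart span=5m avg(value_percentage_difference)
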
Do we have a Splunk attribute to fetch the index? We are passing the index in the Splunk query:

index=aaa
index=bbb

With only the log file, do we have any Splunk attribute to fetch the index, like we have for host?

index=aaa (source="/var/log/tes1.log") | stats count by host

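If I'm reading the question right, index is itself a default field on every event, so it can be grouped just like host; a sketch over the two example indexes:

index=aaa OR index=bbb source="/var/log/tes1.log"
| stats count by index, host
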
The index keeps rolling off data due to size, even after the size has been increased. Is there another way to resolve this issue?

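For reference, a sketch of the indexes.conf settings that usually control this, with placeholder values; buckets also freeze by age, so frozenTimePeriodInSecs is worth checking alongside the size caps:

[my_index]
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 188697600
homePath.maxDataSizeMB = 400000
coldPath.maxDataSizeMB = 100000
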
I have a cloud-based server sending events to the indexer over my WAN link via HTTP Event Collector (HEC). We have limited bandwidth on the WAN link. I want to limit (blacklist) a number of event codes and reduce the transfer of log data over the WAN.

Q: Does a blacklist in inputs.conf for the HEC filter the events at the indexer, or does it stop those events from being transferred at the source?

Q: If I install a Universal Forwarder, am I able to stop the blacklisted events from being sent across the WAN?

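On the second question, assuming these are Windows event codes: inputs.conf blacklists for Windows event log inputs are applied by the Universal Forwarder on the source machine, so matching events never cross the WAN. A sketch with placeholder channel and codes:

[WinEventLog://Security]
blacklist1 = EventCode="4662|5156"
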
Can you please let me know the TIME_PREFIX and TIME_FORMAT for the below log type?

00:0009:00000:00000:2024/04/12 12:14:02.34 kernel extended error information
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel  read returning -1 state 1
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel nrpacket: recv, Connection timed out, spid: 501, suid: 84

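One possible props.conf stanza, assuming the colon-separated numeric prefix is fixed-width as in the samples (the sourcetype name is a placeholder):

[my_sourcetype]
TIME_PREFIX = ^\d{2}:\d{4}:\d{5}:\d{5}:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2N
MAX_TIMESTAMP_LOOKAHEAD = 25
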
Hi, I have the following fields in the logs on my proxy for backend services:

_time -> timestamp
status_code -> HTTP status code
backend_service_url -> app it is proxying

What I want to do is aggregate status codes by the minute per URL for each status code. So sample output would look like:

time | backend-service | Status code 200 | Status code 201 | Status code 202
10:00 | app1.com | 10 | | 2
10:01 | app1.com | | 10 |
10:01 | app2.com | 10 | |

Columns would be dynamic based on the available status codes in the timeframe I am searching. I found a lot of questions on aggregating all 200s into 2xx or total counts by URL, but not this. I'd appreciate any suggestions on how to do this. Thanks!

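One pattern that seems to fit, since chart/xyseries only take a single row field: combine _time and the URL into one key, pivot by status code, then split the key back apart (the index name and the "#" separator are assumptions):

index=proxy_logs
| bin _time span=1m
| stats count by _time, backend_service_url, status_code
| eval key=_time."#".backend_service_url, status_code="Status code ".status_code
| xyseries key status_code count
| eval _time=tonumber(mvindex(split(key,"#"),0)), backend_service=mvindex(split(key,"#"),1)
| fields - key
| sort _time
| table _time backend_service "Status code *"
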