All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I want to write a search which gives me total event counts for each host as per the time range picker. Additionally, I want to add two more columns which give event counts for each host in the last 7 days and the last 24 hours. My SPL is in this format:

index="xxx" field1="dummy_value" field2="dummy_value"
| stats sparkline(sum(event_count)) AS sparkline, max(_time) AS _time, sum(event_count) AS "Total_Event_Count" BY field2, field3, field4
| table field2, sparkline, field3, field4

I tried using the append command but it does not give me proper results, so I need your help to build the SPL. Thank you
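One common pattern for this (a hedged sketch, assuming event_count holds per-event counts and the time range picker spans at least 7 days) is to compute the conditional sums in the same stats pass instead of appending:

```spl
index="xxx" field1="dummy_value" field2="dummy_value"
| eval cnt_24h = if(_time >= relative_time(now(), "-24h"), event_count, 0)
| eval cnt_7d  = if(_time >= relative_time(now(), "-7d"),  event_count, 0)
| stats sparkline(sum(event_count)) AS sparkline,
        sum(event_count) AS "Total_Event_Count",
        sum(cnt_24h)     AS "Last_24h_Count",
        sum(cnt_7d)      AS "Last_7d_Count"
        BY host
```

Because all three columns come from one stats, the rows stay aligned per host, which append cannot guarantee.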
I have created a number of apps and push them out using the command line. In serverclass.conf I want to add a restart of the forwarders. This is what I have so far:

[default]
restartSplunkd = true
issueReload = true

[serverClass:duke_test_app:app:duke_test_app]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:duke_test_app]
whitelist.0 = xxxxxx

Will this work?
"user-info" index=user_interface_type sourcetype=*
| table _time, host, port, _raw
| sendemail to="abc@splunk.com" sendresults=true

I use the above query to list out the details for the search term "user-info". I want to take this string "user-info" and pass it into the title of the e-mail as: "Notification received for user-info". How do I do that?
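The sendemail command accepts a subject argument, so the string can be placed directly in the e-mail title; a minimal sketch:

```spl
"user-info" index=user_interface_type sourcetype=*
| table _time, host, port, _raw
| sendemail to="abc@splunk.com" sendresults=true subject="Notification received for user-info"
```

If this runs as a saved alert instead of an ad-hoc search, the alert's e-mail subject field can carry the same text without sendemail at all.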
We have a DSP Kubernetes cluster. When creating a DSP connection, is there a way to set up an HTTP proxy for a data destination, i.e. send outputs to a destination over an HTTP proxy? There seems to be no such option in the UI. Maybe it can be achieved by adding HTTP proxy environment variables to the container in charge of sending outgoing traffic by editing the Kubernetes Deployment, but I am not sure which Deployment I should change, or whether to add those environment variables on the Kubernetes master nodes instead.
I need to write a regular expression to extract a few fields from this, but I am not able to figure it out. Can you please help me with it?

X-Response-Timestamp: 2022-08-24T07:27:26.150Z
x-amzn-Remapped-Connection: close
... 4 lines omitted ...
X-Amzn-Trace-Id: Root=1-6305d2de-69ec840431ff21182b4a9f68
Content-Type: application/json
{"code":"APS.MPI.2019","severity":"FATL","text":"Invalid Request","user_message":"Request id has already used."}

Above is the whole log. I need to extract code, severity and message, but I can't work out the format well enough to fetch them.
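Since the fields of interest sit in the trailing JSON object, one approach (a sketch; the field names are taken from the sample event above) is a single rex over _raw:

```spl
... | rex field=_raw "\"code\":\"(?<code>[^\"]+)\",\"severity\":\"(?<severity>[^\"]+)\",\"text\":\"(?<text>[^\"]+)\",\"user_message\":\"(?<user_message>[^\"]+)\""
```

An alternative that avoids hand-written regex for each field is to capture the JSON blob and let spath parse it: `| rex field=_raw "(?<json>\{.*\})$" | spath input=json`.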
Is there a way to manually trigger modular inputs using the REST API? Most of the advice I've seen involves triggering modular inputs by turning them off and then on again, but my setup uses cron intervals instead of seconds, so the input doesn't execute on enablement.
Hello Splunk Team, I want to build a dashboard over the golden signals. Can anyone help me, or does anyone have a prebuilt dashboard I can base mine on?
Hello everyone, I have been reading the documentation for timechart, however I am a bit confused about how I can modify the encoding for the timechart to return the second column instead of the first, as shown in the image. I want to use a single-value timechart to monitor the hosts reporting to me per hour and track the increase/decrease in their number. Many thanks for all your help.
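For hosts reporting per hour specifically, a distinct count inside timechart (a sketch; the base search is an assumption) sidesteps the column-selection question, because it produces a single series:

```spl
index=your_index
| timechart span=1h dc(host) AS reporting_hosts
```

In a single-value visualization this series drives both the current value and, if the trend option is enabled, the hour-over-hour increase/decrease indicator.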
Greetings, I use LDAP for authentication combined with RSA multi-factor authentication. Is it possible to configure an exception for local users?
Hi, how do I display my Status Indicator with dynamic colors and icons in a Trellis layout?

| eval status=case(status_id==0,"Idle", status_id>=1 AND status_id<=3,"Setup/PM", status_id==4,"Idle", status_id>=5 AND status_id<=6,"Down", status_id>=7 AND status_id<=8,"Idle", status_id==9,"Running")
| eval color=case(status_id==0,"#edd051", status_id>=1 AND status_id<=3,"#006d9c", status_id==4,"#edd051", status_id>=5 AND status_id<=6,"#ff0000", status_id>=7 AND status_id<=8,"#edd051", status_id==9,"#42dba0")
| eval icon=case(status_id==0,"times-circle", status_id>=1 AND status_id<=3,"user", status_id==4,"times-circle", status_id>=5 AND status_id<=6,"warning", status_id>=7 AND status_id<=8,"times-circle", status_id==9,"check")
| stats last(status) last(color) last(icon) BY internal_name

(Note: the original color and icon evals used single `=` inside case(), e.g. `status_id=0`; eval comparisons need `==`, corrected above.) This only displays the status with no icons and the default grey color. Thanks.
I have recently realized that certain data models are occupying a lot of disk space, and have lowered the summary range of the related data model from 3 months to 1 month. So far I have not seen a drop in the disk size usage on the Data Model screen. Do I need to Rebuild and Update the acceleration? If so, in which order, and are there any performance or other risks involved in doing so? Thanks, Regards,
Hi All, I have an Exchange on-prem distribution list, let's say dl@mydomain.com. I want to know how many emails were sent to this DL in one year. Experts, please help me with the Splunk query to get this information.
We have had a problem with our Microsoft Azure plugin since July: the field appliedConditionalAccessPolicies: [ [ - ] ] is missing its data. We upgraded the plugin to the latest version but are still facing the same issue. Is anyone facing a similar problem? Please let us know if there is a fix.
Hi Splunkers, I need help translating this search query into Splunk configuration via props/transforms. For context, the letter field was extracted via CSV. The "letter" field value is dynamic (it may have fewer or more values), and each value is wrapped in an HTML tag, syntax: <p>"value"</p>. What would be best practice in this scenario: a search-time method or index time?

Sample query:
| makeresults
| eval letter = "<p>A</p><p>B</p><p>C</p><p>D</p>"
| eval letter = replace(letter,"<p>","")
| eval letter = replace(letter,"</p>","__")
| makemv delim="__" letter

Expected output:
letter
A
B
C
D
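A search-time props/transforms equivalent of that query might look like the following sketch (the sourcetype name and the target field name letter_value are assumptions; SOURCE_KEY makes the transform read the already-extracted letter field, and MV_ADD turns repeated matches into a multivalue field):

transforms.conf
```
[letter_mv]
SOURCE_KEY = letter
REGEX = <p>([^<]+)</p>
FORMAT = letter_value::$1
MV_ADD = true
```

props.conf
```
[your_sourcetype]
REPORT-letter_mv = letter_mv
```

Search-time extraction is generally preferred here, since it costs no index space and can be changed without re-indexing.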
Hi Guru, how do we exclude 0% process usage from hostmetrics? We would like to capture only processes with >0% usage. I would appreciate a sample.

hostmetrics:
  collection_interval: 10s
  scrapers:
    # System process metrics, disabled by default
    process:
      (filter / exclude 0% process usage)
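The hostmetrics process scraper has no value-based filter of its own, but a filter processor placed after it can drop the zero datapoints. A sketch (the metric name, processor key, and pipeline wiring are assumptions to verify against your collector version and config):

```yaml
processors:
  filter/drop_idle_processes:
    metrics:
      datapoint:
        - 'metric.name == "process.cpu.utilization" and value_double == 0.0'

service:
  pipelines:
    metrics:
      processors: [filter/drop_idle_processes]
```

This filters at the datapoint level, so a process is dropped only for the intervals in which its usage is exactly zero.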
Hi peeps, I need help extracting some fields. Sample logs:

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:25\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackerIPv4 cs7=25 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackedProtocol cs7=42666 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

I need help extracting the underlined value into a field named TCP. Sample: TCP=101.11.10.01. Please help. Thanks.
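A rex sketch for the IP after "TCP from" in the Object line (assuming every event of interest carries that phrase, as both samples do):

```spl
... | rex "Object: TCP from (?<TCP>\d{1,3}(?:\.\d{1,3}){3})"
```

If the port is also wanted, extend the pattern with `at [\d.]+:(?<AttackedPort>\d+)` after the captured address.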
Hi, I have two clustered environments. I want to copy knowledge objects (KOs) from one site to another; they need to have pretty much the same alerts, dashboards, etc. Thanks
Currently seeing issues after performing a certificate renewal.

Errors seen in splunkd.log:

08-24-2022 00:58:03.942 +0000 ERROR SSLCommon - Can't read key file /opt/splunk/etc/auth/splunkweb/private.key errno=185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch.
08-24-2022 00:58:03.942 +0000 ERROR HTTPServer - SSL context could not be created - error in cert or password is wrong
08-24-2022 00:58:03.942 +0000 ERROR HTTPServer - SSL will not be enabled

The web.conf configuration was validated in $SPLUNK_HOME/var/run/splunk/merged/web.conf and $SPLUNK_HOME/etc/system/local/web.conf:

sslPassword = <HASHED_PASSWORD>
serverCert = $SPLUNK_HOME/etc/auth/splunkweb/server.pem
privKeyPath = $SPLUNK_HOME/etc/auth/splunkweb/private.key

I confirmed that the sslPassword is valid by decrypting it:

/opt/splunk/bin/splunk show-decrypted --value <HASHED_PASSWORD>
openssl rsa -in /opt/splunk/etc/auth/splunkweb/private.key -noout -text
<decrypted_HASHED_PASSWORD>

The private key opens correctly. The following commands were run to validate the integrity of the certificates:

openssl x509 -noout -modulus -in /opt/splunk/etc/auth/splunkweb/cert.pem | openssl md5
openssl x509 -noout -modulus -in /opt/splunk/etc/auth/server.pem | openssl md5
openssl rsa -noout -modulus -in /opt/splunk/etc/auth/splunkweb/private.key | openssl md5

All values are the same. The host has been rebooted recently and SELinux is disabled.
Hello! I've deployed Splunk_TA_Stream to a SUSE Enterprise 12 host in order to capture queries from an Informix DB 12.10FC14AEE, but I can't see any queries. I do have all the other data, such as SSH, DNS queries, etc. Do I need another configuration in order to be able to read that data? Thanks!
I have a lookup file called ipaddress.csv. The column title in the file is ipaddress. I want to search my logs for all of these IP addresses. I know I need to use inputlookup to get the addresses from the file, but I can't figure out how to then feed them to a search. Thanks in advance
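A common pattern is to feed the lookup in as a subsearch, renaming the lookup column to whatever field holds the address in the events (index name and src_ip are assumptions here):

```spl
index=your_index [| inputlookup ipaddress.csv | fields ipaddress | rename ipaddress AS src_ip]
```

The subsearch expands into an OR of field-value pairs, e.g. (src_ip="1.2.3.4" OR src_ip="5.6.7.8" ...), so the rename must target the field name the events actually use.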