All Topics

I want to generate a time chart that shows time on the x-axis, results on the y-axis, and a hue (legend) for the different analytes. So far, what I have generated is not the format I am looking for; my search is below. I probably do not need fieldformat, but I was thinking I needed the correct data type. I am used to Python Jupyter notebooks and am quite new to Splunk, so any help would be very much appreciated. As a reference, I am also including a scatter plot from Python that mirrors what I am trying to get in Splunk.

[Screenshot: incorrect Splunk scatter plot]
[Screenshot: Python example of what I want to get to]

|inputlookup $lookupToken$
|where _time <= $tokLatestTime$
|where _time >= $tokEarliestTime$
|search $lab_token$
|search $analyte_token$
|search $location_token$
|sort _time desc
|replace "ND" WITH 0 IN Results
|table _time, Results, Analyte
|fieldformat _time=strftime(_time, "%Y-%m-%d")
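One way to get a legend entry per analyte is to chart the Results values split by Analyte, so each analyte becomes its own series. A minimal sketch, assuming Results is numeric after the replace and Analyte holds the analyte names (the span and the avg aggregation are placeholders to adjust):

|inputlookup $lookupToken$
|where _time >= $tokEarliestTime$ AND _time <= $tokLatestTime$
|search $lab_token$ $analyte_token$ $location_token$
|replace "ND" WITH 0 IN Results
|eval Results=tonumber(Results)
|timechart span=1d avg(Results) by Analyte

Rendering this with a line or scatter visualization should put time on the x-axis and one series per analyte in the legend.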
Greetings! I have been Googling, going through Pluralsight, and reading the Splunk docs, and I am extremely new to Splunk. I searched the community and didn't find anything close enough to what I need. I am asking if anyone here has an idea of how I can find newly created users and then check whether there are any events that would signify those users were added to one of two groups. So far, what I have is not working; I can't figure out how to take the result set from the first search and fire off a second search (like a foreach), or whether I am even thinking about this the right way. I thought using the fields command would do it, and I have also tried to use "return":

index=wineventlog source="wineventlog:security" eventcode=4720
| fields user_principal_name
| search index=wineventlog source="wineventlog:security" eventcode in (4732,4728) "group1" OR "group2"

I don't get errors, and if I break the first query out it works on its own, but I am not sure how to take that result and pass it to the second search. Most examples feature lookups, and if that is the best way, awesome. I am looking for technique tips as well as search construction help. Thank you in advance!
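One common pattern is to make the group-membership search the outer search and feed it the newly created accounts from a subsearch. A minimal sketch, assuming the new account name shows up in a field called user in the 4720 events and as member in the 4728/4732 events, and that Group_Name carries the group name (field names vary by add-on, so adjust to what your events actually contain):

index=wineventlog source="wineventlog:security" EventCode IN (4732,4728) ("group1" OR "group2")
    [ search index=wineventlog source="wineventlog:security" EventCode=4720
      | fields user
      | rename user AS member
      | format ]
| table _time EventCode member Group_Name

The subsearch returns the list of accounts created in the selected time range, and the outer search only keeps 4728/4732 events whose member matches one of them.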
Hello, I am new to Splunk rex and need help extracting a value from a string. This is what I have so far:

rex "Error while calling database for id = (?<id>.*)"

Example string: "Error while calling database for id =8748723874_1"
The output should be 8748723874. Thanks.
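Since the part you want is the run of digits before the underscore, one option is to capture only digits and make the space around the equals sign optional. A minimal sketch (the field name id is kept from your attempt):

... | rex "id\s*=\s*(?<id>\d+)"

\d+ stops at the underscore, so id comes out as 8748723874 even when there is no space after the equals sign.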
I am working on creating a monitoring dashboard that will alert us when one of our customers' databases stops sending event data that we need for reporting. However, I am struggling to filter my results down to those customers that are not sending data. Here's my search:

| inputlookup HealthcareMasterList.csv
| search ITV=1 AND ITV_INSTALLED>1 AND MarinaVersion IN (15*,16*,17*,18*)
| table propertyId FullHospitalName MarinaVersion
| append
  [ search index=hceventmonitoring
      [| inputlookup HealthcareMasterList.csv
       | search ITV=1 AND ITV_INSTALLED>1 AND MarinaVersion IN (15*,16*,17*,18*)
       | table propertyId
       | format]
  | dedup _raw
  | stats dc(monitorEventName) as TotalEventTypes by eventDate propertyId
  | eval {eventDate}=TotalEventTypes
  | fields - eventDate TotalEventTypes
  | stats values(*) as * by propertyId]
| selfjoin keepsingle=t max=0 propertyId

The first part of the search establishes a list of which customers I should be receiving event data from, so they show up in the results even if there is no event data in Splunk. The second part determines how many distinct event types a customer is sending each day.

[Screenshot: a portion of my results]

What I need to happen next is to filter down to any rows with NULL in any of the displayed dates. I tried to use | where isnull(2023*) but then found out you can't have wildcards in field names. If I filter down to the nulls before doing | eval {date}=TotalEventTypes, then I don't have any dates to work with, as that field is blank for those rows (since they aren't sending event data, I don't have any dates from the event data to display). I've seen other posts that suggest using foreach, but I struggle to see how I could use that here, since my field names change each day and I need the actual date to display as the field name when I view this in the dashboard. If I filter out the nulls first, is there a way to dynamically create a field with the dates of the last 7 days, so that I can then add | eval {date}=TotalEventTypes and have those dates as field names? Any thoughts or suggestions are highly appreciated! I've been racking my brain for almost two days trying to figure this out. LOL.
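foreach does accept a wildcard, and inside its template the matched field can be referenced with the <<FIELD>> token, so one option is to count null date columns per row and keep only the rows with at least one gap. A minimal sketch, assuming all of the date columns start with 2023 (the wildcard and the helper field missing are placeholders):

... | selfjoin keepsingle=t max=0 propertyId
| eval missing=0
| foreach 2023* [ eval missing=missing + if(isnull('<<FIELD>>'), 1, 0) ]
| where missing > 0
| fields - missing

Because foreach walks the field names that already exist in the result set, the date columns keep their real names, and the customers appended from the lookup (for whom every date column is null) surface along with anyone missing a single day.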
Hello, I am new to Splunk rex, so I need help with a regex. From my logs I have already extracted a string, but I now need to extract a value from within that string.

Example string: "Error exception for fetching data =1234567890_1"

Question: from the above string, how can I use rex to get the value 1234567890? Please help. Thanks.
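One option is to anchor on the equals sign and capture only the digits, so the underscore and everything after it are left out. A minimal sketch (the field name your_extracted_field and the capture name value are just placeholders for your own names):

... | rex field=your_extracted_field "=\s*(?<value>\d+)"

\d+ stops at the first non-digit, so value becomes 1234567890.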
Hi, I am trying to change the value "Cloud Service Provider (CSP)" to "CSP". The field name is "Registration Type". Thanks.
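If this is meant to happen at search time, one option is an eval that swaps the value only when it matches. A minimal sketch, assuming the change should apply within a search rather than in the source data:

... | eval "Registration Type"=if('Registration Type'="Cloud Service Provider (CSP)", "CSP", 'Registration Type')

Single quotes reference the field name containing spaces on the right-hand side of eval, while double quotes are string literals.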
When I try to run some actions while developing playbooks, I get a notification that the execution was interrupted / cancelled by user. This happens even if I do not touch anything or cancel the playbook. Does anyone know why this happens and how to prevent it?
After upgrading to 9.0.4 from 8.2.x, Splunk Web loads with a blank page, just the Splunk logo. 
Hello, we have Splunk Enterprise in our PROD environment with about 18 Indexers, 13 Search Heads, and 1 CM/Deployer/LicenseMaster, and all of the servers run Red Hat Enterprise Linux Server release 7.9 (Maipo). We are planning to add additional servers for Indexers and Search Heads, and the new hardware will be on RHEL8. Is it okay to have RHEL7 and RHEL8 running in the same Splunk environment? Please advise. Thanks, Dhana
Hi All, I am trying to drill down from a trellis dashboard panel to another dashboard. The trellis dashboard panel is created using the query below with the "Single Value" visualization.

... | rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
| rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| eval Available=(Total-Used)
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| lookup Master_List.csv "host"
| search "Tech Stack"=* | search Region=* | search Environment=* | search host=* | search File_System=* | search Disk_Usage=*
| stats count count(eval(Disk_Usage>=80)) as issue by host
| stats count as Total_Servers count(eval(issue > 0)) as Affected_Servers

The dashboard that is drilled down to gives the details of the disk usage along with the servers and other fields. The query below is used to create that dashboard:

... | rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
| rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| eval Available=(Total-Used)
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| lookup Master_List.csv "host"
| search "Tech Stack"=* | search Region=* | search Environment=* | search host=* | search File_System=* | search Disk_Usage=*
| eval Server=if(Disk_Usage>=80,"Affected_Servers","Total_Servers")
| search Server="$SVR$"
| table Time_Stamp,Environment,host,File_System,Total,Used,Available,Disk_Usage
| sort - Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

Now, while configuring the drilldown, I am using "SVR" as the parameter and "$trellis.value$" as the token, but the drilldown gives no results and I can see the token is not passed. Please also help me modify the drilled-down dashboard query so that when "Total_Servers" is clicked it gives details of all disk usages, and when "Affected_Servers" is clicked it gives details of only the disk usages that are 80 or above. Your kind inputs are highly appreciated..!! Thank You..!!
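Two hedged suggestions, based on how trellis drilldown tokens usually behave. First, when the trellis is split across aggregation fields, the clicked segment's name (Total_Servers or Affected_Servers) is typically exposed as $trellis.name$, while $trellis.value$ carries the numeric count, so passing $trellis.name$ into SVR is probably what you want; confirm this against your dashboard's drilldown editor. Second, the final filter in the target dashboard can be written so that Total_Servers keeps every row and Affected_Servers keeps only rows at or above 80, for example:

...
| eval Server=if(Disk_Usage>=80,"Affected_Servers","Total_Servers")
| where Server="$SVR$" OR "$SVR$"="Total_Servers"
| table Time_Stamp,Environment,host,File_System,Total,Used,Available,Disk_Usage

When SVR is Total_Servers the second condition is always true, so all rows pass through; when SVR is Affected_Servers only the rows flagged by the eval remain.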
Hello everyone. I need to distribute a *.csv file that is created by a certain script (not by Splunk). The script runs every day and may update the file. How can I do this in the SHC? I tried to push the file with the Deployer, but the main problem with that approach is that a lookup file is only created if it does not already exist on the SHC members; once I push it, I can't update it. I understand that I could develop an external script that deletes the old file on the SHC members and then pushes a new one with the Deployer, but maybe there is an easier way to handle my case?
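One common workaround is to let search head cluster lookup replication do the distribution instead of the deployer: get the script's output into Splunk (for example by monitoring or periodically indexing the CSV) and rebuild the lookup with a scheduled search, since a lookup written by outputlookup on one member is replicated to the rest of the cluster. A minimal sketch, where the index, sourcetype, field names, and lookup name are placeholders for however you choose to ingest the file:

index=script_output sourcetype=my_csv
| table field1 field2 field3
| outputlookup my_lookup.csv

Schedule this to run shortly after the script updates the file; every member then sees the refreshed my_lookup.csv without any deployer push.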
On my Heavy Forwarder server I have recently been seeing the message below in the messages tab:

File Integrity checks found 114 files that did not match the system-provided manifest. Review the list of problems reported by the InstalledFileHashChecker in splunkd.log [File Integrity Check View]; potentially restore files from installation media, change practices to avoid changing files, or work with support to identify the problem.

How can we get this fixed?
Whenever I run the command "splunk reload deploy-server" on my Deployment Master server, I get this message:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

How can we get this fixed? Kindly help to check and update.
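As the warning itself points out, the setting lives in the [sslConfig] stanza of server.conf. A minimal sketch of what enabling it might look like, on the assumption that your server certificates actually contain the hostnames the CLI connects to (otherwise turning validation on will cause connection failures instead of warnings):

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
cliVerifyServerName = true

Restart Splunk after the change. The warning is informational and does not stop the reload from working, so check what your security policy requires before deciding whether to enable validation or simply accept the message.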
Whenever I restart, or stop and start, the Splunk Enterprise instance running on my HF or DM server, I get the message below, although the splunk process starts as expected. How can we get rid of this message? Kindly let me know.

PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
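Going by the message text, the warning comes from the PYTHONHTTPSVERIFY line in splunk-launch.conf, so setting it to 1 should make the message go away. A minimal sketch, with the caveat that this re-enables Python certificate validation, so any HTTPS endpoints the embedded Python talks to must present certificates your instance trusts:

# $SPLUNK_HOME/etc/splunk-launch.conf
PYTHONHTTPSVERIFY = 1

Restart Splunk after editing the file. If the value was deliberately set to 0 to work around certificate errors, fix the underlying certificate trust first, otherwise scripted inputs and apps that rely on Python HTTPS calls may start failing.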
Our Splunk Heavy Forwarder and Deployment Master servers are running version 9.0.0, and when we navigate to Apps and click Find More Apps I get the error below:

Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name. Your Splunk instance is specifying custom CAs to trust using the sslRootCAPath configuration in server.conf's [sslConfig] stanza. Make sure the CAs in the appsCA.pem (located under $SPLUNK_HOME/etc/auth/appsCA.pem) are included in the CAs specified by sslRootCAPath. To do this, append appsCA.pem to the file specified by the sslRootCAPath parameter.

How can we get this fixed? I can see the same error on all our HF servers and the DM server as well. Kindly help to check and update.
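The error message itself spells out the remediation: append appsCA.pem to whatever file sslRootCAPath points to. A minimal sketch of the steps, where the CA bundle path is a placeholder for whatever your [sslConfig] stanza actually specifies:

# 1. Check which file sslRootCAPath points to
$SPLUNK_HOME/bin/splunk btool server list sslConfig | grep sslRootCAPath

# 2. Append Splunk's apps CA bundle to that file (back it up first)
cp /path/to/your/ca-bundle.pem /path/to/your/ca-bundle.pem.bak
cat $SPLUNK_HOME/etc/auth/appsCA.pem >> /path/to/your/ca-bundle.pem

# 3. Restart Splunk so the new trust chain is picked up
$SPLUNK_HOME/bin/splunk restart

Repeat on each HF and the DM, since each instance validates the apps endpoint against its own sslRootCAPath file.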
I have this search:

index="firewall" dest_ip=172.99.99.99 dest_port=* | stats count by src_ip,dest_port,action,src_user

Instead of showing all src_ips, I want to group on the subnet part, i.e. the first three octets (not being a network guy, I might use the wrong wording); using the dest_ip as an example, that would be 172.99.99 in the stats. My guess is rex, but I am guessing there might be some other, easier functions in Splunk for doing this?
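One option that avoids writing a regex is to split the address on the dots and glue the first three octets back together with eval; a rex that captures everything up to the third dot works just as well. A minimal sketch (the field name src_subnet is just an illustration):

index="firewall" dest_ip=172.99.99.99 dest_port=*
| eval src_subnet=mvjoin(mvindex(split(src_ip,"."),0,2),".")
| stats count by src_subnet,dest_port,action,src_user

The rex equivalent would be | rex field=src_ip "^(?<src_subnet>\d+\.\d+\.\d+)" placed before the stats.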
Hi, are there any available applications to address the issue of incorrect parsing of Secret Server logs in Splunk Cloud? Thanks
Hello all, I am trying to blacklist an event that is tied to a specific sAMAccountName, namely sAMAccountName="Alertz - ProductFeedback". The only way I can think of to achieve this is maybe a blacklist regex statement, but I am not sure and not very good with regex. Below is a sample event. Please let me know if there are any questions.

08/16/2023 09:34:07.541
dcName=RNBSAD1.rightnetworks.com
admonEventType=Update
Names:
    objectCategory=CN=Group,CN=Schema,CN=Configuration,DC=rightnetworks,DC=com
    name=Alertz - ProductFeedback
    distinguishedName=CN=Alertz - ProductFeedback,OU=Expired Alert Groups,OU=Desk Alerts,OU=Security Groups,DC=rightnetworks,DC=com
    cn=Alertz - ProductFeedback
Object Details:
    sAMAccountType=268435456
    sAMAccountName=Alertz - ProductFeedback
    objectSid=S-1-5-21-2605281412-2030159296-1019850961-856824
    objectGUID=1e0bcfbf-dc8b-43e9-855a-7004ce3d6b3b
    whenChanged=09:33.53 AM, Wed 08/16/2023
    whenCreated=09:31.41 AM, Tue 08/01/2023
    objectClass=top|group
Event Details:
    uSNChanged=820790490
    uSNCreated=813674539
    instanceType=4
Additional Details:
    dSCorePropagationData=16010101000000.0Z
    groupType=-2147483646
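Since this comes from the Active Directory monitoring input rather than a Windows event log channel, one hedged option is to drop the events at parse time with a nullQueue transform on the indexer or heavy forwarder that first parses the data. A minimal sketch, where the props stanza name is a placeholder for whatever sourcetype your admon events actually use:

# props.conf
[your:admon:sourcetype]
TRANSFORMS-drop_alertz = drop_alertz_productfeedback

# transforms.conf
[drop_alertz_productfeedback]
REGEX = sAMAccountName=Alertz - ProductFeedback
DEST_KEY = queue
FORMAT = nullQueue

The REGEX only needs to match somewhere in the raw event, so a literal string is enough here; restart the parsing tier after deploying the change.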
I would like to add a label for the upper/lower 95 and was wondering how I could do that. I'd also like to have it in the same color as the line, similar to --Upper|Lower95.
We have successfully configured the Microsoft Teams app in Splunk SOAR, and we are able to send messages to a Teams channel, but the messages come from the account of the Azure Global Admin who created the App Registration and granted the permissions. Within the Asset Configuration in Splunk SOAR, we have tried using different users under the "Select a user on behalf of which automated actions can be executed (e.g. test connectivity, ingestion)" setting, without success. How do we configure the app to send from a different user?