All Topics
Hi all, I am trying to implement Splunk for a particular use case: HF (configured behind a proxy) > transfer data via the internet > indexer. Kindly share your knowledge. Further help would be highly appreciated. Thanks.
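A minimal outputs.conf sketch for the HF side, assuming a SOCKS5 proxy and TLS for the internet leg (host names, ports, and certificate paths are placeholders):

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = internet_indexers

    [tcpout:internet_indexers]
    server = indexer.example.com:9997
    # route Splunk-to-Splunk traffic through a SOCKS5 proxy
    socksServer = proxy.example.com:1080
    # encrypt data in transit across the internet
    clientCert = $SPLUNK_HOME/etc/auth/client.pem
    sslVerifyServerCert = true

Note that outputs.conf supports SOCKS5 proxies specifically; if your proxy is HTTP-only, a different path (for example, HTTP Event Collector output through the proxy) would be needed.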
I want to query DB information from DB Connect, but each run of the query pulls the entire table, which takes up a lot of storage space. Is there a way to get only the new rows, without duplicates? The amount of new data differs every day, so I can't simply limit the number of rows fetched. Thanks.
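DB Connect's rising-column input type is built for this: it checkpoints the highest value seen for a chosen column and substitutes it into the query on the next run. A sketch, assuming an auto-incrementing id column (table and column names are placeholders):

    -- DB Connect replaces ? with the stored checkpoint value on each run
    SELECT * FROM my_table
    WHERE id > ?
    ORDER BY id ASC

With the input type set to Rising and id as the rising column, each execution ingests only the rows added since the previous run.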
Folks, when we configure the advanced section in Source Types (Settings > Source Types > Edit), does anyone know where the original configuration file behind the advanced view lives? I chose the "linux_secure" source type and checked the advanced tab. I saw "src" and "src_ip" in the search results for my data that uses this source type, but I couldn't find any settings for these fields. So I thought configurations were missing from this tab, and I wanted to know the source configuration files for each source type. Please share your knowledge.
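btool can show which .conf file each setting comes from; a sketch (fields like src and src_ip typically come from search-time props/transforms delivered by an add-on, rather than from the sourcetype stanza the advanced tab shows):

    # list every props.conf setting for the sourcetype, annotated with its source file
    $SPLUNK_HOME/bin/splunk btool props list linux_secure --debug

    # search-time field extractions and aliases often live in transforms.conf
    $SPLUNK_HOME/bin/splunk btool transforms list --debug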
I want to generate a time chart that shows time on the x-axis, results on the y-axis, and a hue (legend) showing the different analytes. What I have generated so far is not the format I am looking for; my search is below. I probably do not need fieldformat, but I was thinking I needed the correct data type. I am used to Python Jupyter notebooks and am quite new to Splunk. Any help would be very appreciated. For example, I can generate a scatter plot from Python that mirrors what I am looking for in Splunk.

[Screenshot: incorrect Splunk scatter plot] [Screenshot: example of the desired plot]

    |inputlookup $lookupToken$
    |where _time <= $tokLatestTime$
    |where _time >= $tokEarliestTime$
    |search $lab_token$
    |search $analyte_token$
    |search $location_token$
    |sort _time desc
    |replace "ND" WITH 0 IN Results
    |table _time, Results, Analyte
    |fieldformat _time=strftime(_time, "%Y-%m-%d")
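A sketch of the timechart approach, assuming Results is numeric after the ND replacement and Analyte is the series field; timechart puts _time on the x-axis and draws one series per analyte, which the legend then distinguishes:

    |inputlookup $lookupToken$
    |where _time >= $tokEarliestTime$ AND _time <= $tokLatestTime$
    |search $lab_token$ $analyte_token$ $location_token$
    |replace "ND" WITH 0 IN Results
    |eval Results=tonumber(Results)
    |timechart span=1d avg(Results) by Analyte

Pick the Line chart (or a scatter visualization) on the result; the 1d span and the avg aggregation are assumptions to adjust for your sampling frequency.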
Greetings! I have been googling, Pluralsighting, and reading the Splunk docs, and I am extremely new to Splunk. I searched the community and didn't find anything close enough to what I need. So I am asking if anyone here has an idea of how I can find newly created users and then check whether there are also events signifying those users were added to one of two groups. What I have so far is not working; I can't figure out how to take the result set from the first search and fire off a second search (like a foreach), or whether I am even thinking about that right. I was thinking the fields command would do it, and I have also tried "return":

    index=wineventlog source="wineventlog:security" eventcode=4720
    | fields user_principal_name
    | search index=wineventlog source="wineventlog:security" eventcode in (4732,4728) "group1" OR "group2"

I don't get errors, and I can break the first query up and it works, but I am not sure how to take that result and pass it to the second. Most examples feature lookups, and if that is the best way, awesome. I am looking for technique tips as well as search construction help. Thank you in advance!
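A sketch using a subsearch, which runs first and turns its results into filter terms for the outer search. It assumes the 4720 events carry the new account in user_principal_name and the 4732/4728 events carry it in a field called member_id (both field names are assumptions; adjust to your data):

    index=wineventlog source="wineventlog:security" EventCode IN (4732,4728) ("group1" OR "group2")
        [ search index=wineventlog source="wineventlog:security" EventCode=4720
          | fields user_principal_name
          | rename user_principal_name AS member_id
          | format ]

The rename inside the subsearch makes the generated terms filter on the field the outer events actually use.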
Hello, I am new to Splunk rex and need help extracting a value from a string:

    rex "Error while calling database for id = (?<id>.*)"

Example string: "Error while calling database for id =8748723874_1". The output should be 8748723874. Thanks.
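A sketch that tolerates optional whitespace around the equals sign and captures only the digits, so the _1 suffix is excluded (assumes the id is always numeric):

    | rex "id\s*=\s*(?<id>\d+)"

On "Error while calling database for id =8748723874_1" this yields id=8748723874, because \d+ stops at the underscore.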
I am working on creating a monitoring dashboard that will alert us when one of our customers' databases stops sending the event data we need for reporting. However, I am struggling to filter my results down to the customers that are not sending data. Here's my search:

    | inputlookup HealthcareMasterList.csv
    | search ITV=1 AND ITV_INSTALLED>1 AND MarinaVersion IN (15*,16*,17*,18*)
    | table propertyId FullHospitalName MarinaVersion
    | append
      [ search index=hceventmonitoring
            [| inputlookup HealthcareMasterList.csv
             | search ITV=1 AND ITV_INSTALLED>1 AND MarinaVersion IN (15*,16*,17*,18*)
             | table propertyId
             | format]
      | dedup _raw
      | stats dc(monitorEventName) as TotalEventTypes by eventDate propertyId
      | eval {eventDate}=TotalEventTypes
      | fields - eventDate TotalEventTypes
      | stats values(*) as * by propertyId]
    | selfjoin keepsingle=t max=0 propertyId

The first part of the search establishes the list of customers I should be receiving event data from, so they show up in the results even if there is no event data in Splunk. The second part determines how many distinct event types a customer sends each day. Below is a screenshot of a portion of my results.

What I need to happen next is to filter down to any rows with NULL in any of the displayed dates. I tried | where isnull(2023*) but found out you can't have wildcards in field names. If I filter down to the nulls before doing | eval {date}=TotalEventTypes, then I don't have any dates to work with, as that field is blank for those rows (since they aren't sending event data, I have no dates from the event data to display). I've seen other posts suggesting foreach, but I struggle to see how to use it here, since my field names change each day and I need the actual date to display as the field name when I view this in the dashboard. If I filter out the nulls first, is there a way to dynamically create a field with the dates of the last 7 days, then add | eval {date}=TotalEventTypes and have those dates as field names? Any thoughts or suggestions are highly appreciated! I've been racking my brain for almost two days trying to figure this out. LOL.
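A sketch of the foreach approach: the wildcard is resolved against the actual field names at runtime, so the columns can keep their date names while you flag any row with a gap (the 2023* pattern and the has_gap name are assumptions):

    ...
    | selfjoin keepsingle=t max=0 propertyId
    | eval has_gap=0
    | foreach 2023* [ eval has_gap=if(isnull('<<FIELD>>'), 1, has_gap) ]
    | where has_gap=1
    | fields - has_gap

Inside the foreach body, '<<FIELD>>' expands to each matching date column in turn, so the null checks work without knowing the names in advance.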
Hello, I am new to Splunk rex, so I need help with a regex. I have extracted a string from my logs, but now I need to extract a value from that string in turn. Example string: "Error exception for fetching data =1234567890_1". How can I use rex to get the value 1234567890 from this string? Please help. Thanks.
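A sketch, assuming the value is always the digits immediately after the equals sign (the underscore suffix is dropped because \d+ matches digits only):

    | rex "data\s*=(?<id>\d+)"

On the example string this extracts id=1234567890.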
Hi, I am trying to change the value "Cloud Service Provider (CSP)" to "CSP". The field name is "Registration Type". Thanks
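A sketch using the replace command, which matches the field value literally, so the parentheses need no escaping:

    | replace "Cloud Service Provider (CSP)" WITH "CSP" IN "Registration Type"

Or with eval, where the pattern is a regex and the parentheses must be escaped:

    | eval "Registration Type"=replace('Registration Type', "Cloud Service Provider \(CSP\)", "CSP")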
Hello Resilience Questers! Hope you are enjoying the challenge so far! The second leaderboard update for The Great Resilience Quest is out. It's thrilling to witness the dedication and resilience you've all showcased.

Check out the Leaderboard

Kudos to our current front-runners! As the competition heats up, strategize your moves, finish the first two chapters, and secure your spot on the leaderboard. By doing so, you will be eligible for the Champion's Tribute reward: a $150 Splunk Store Gift Card!

If you are new to the quest, it's not too late to dive in! Join the game and bolster your knowledge of achieving digital resilience with Splunk. The show must go on! Good luck, everyone.

Best regards, Splunk Customer Success
Hello Resilience Questers! The adventure has truly begun, and we are excited to unveil the first official leaderboard for "The Great Resilience Quest"! It's been an incredible journey so far, and we've seen some fantastic efforts from all our participants.

For those new to the quest, it's not too late to join! "The Great Resilience Quest" is our interactive game designed to fortify your understanding of achieving digital resilience with Splunk through engaging real-world use cases. Join us now and embark on this epic journey. Learn more and sign up HERE.

Check out the Leaderboard

Congratulations to our current leaders!

How We Feature the Leaderboard: The leaderboard is determined by a combination of factors: the number of quests players have finished, the chapters they have completed, and the time taken to complete them. It's a multi-faceted approach that recognizes the true champions of resilience.

What's next: Players who have been featured on the leaderboard are now in the pool for the special Champion's Tribute rewards. It is our way of honoring your efforts and encouraging you to continue on this exciting journey toward digital resilience mastery. Please stay tuned for the next leaderboard update in two weeks!

Thank you all for participating, and may the best questers conquer!

Best regards, Splunk Customer Success
When I try to run some actions while developing playbooks, I get a notification that the execution was interrupted/cancelled by the user. This happens even if I do not touch anything or cancel the playbook. Does anyone know why this happens and how to prevent it?
After upgrading to 9.0.4 from 8.2.x, Splunk Web loads with a blank page, just the Splunk logo. 
Hello, we have Splunk Enterprise in our PROD environment with about 18 indexers, 13 search heads, and 1 CM/Deployer/License Master, all running Red Hat Enterprise Linux Server release 7.9 (Maipo). We are planning to add additional servers for indexers and search heads, and the new hardware will run RHEL 8. Is it okay to have RHEL 7 and RHEL 8 running in the same Splunk environment? Please advise. Thanks, Dhana
Hi all, I am trying to drill down from a trellis dashboard panel to another dashboard. The trellis panel is created with the query below, using the "Single Value" visualization:

    ... | rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
    | rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
    | rex field=_raw "\]\,(?P<host>[^\,]+)\,"
    | rex field=_raw "\]\|(?P<host>[^\|]+)\|"
    | rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
    | rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
    | rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
    | rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
    | regex _raw!="^\d+(\.\d+){0,2}\w"
    | regex _raw!="/apps/tibco/datastore"
    | rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
    | rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
    | rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
    | rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
    | eval Available=(Total-Used)
    | eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
    | lookup Master_List.csv "host"
    | search "Tech Stack"=* | search Region=* | search Environment=* | search host=* | search File_System=* | search Disk_Usage=*
    | stats count count(eval(Disk_Usage>=80)) as issue by host
    | stats count as Total_Servers count(eval(issue > 0)) as Affected_Servers

The target dashboard gives the details of the disk usage along with the servers. It is built with the same query up to the lookup and search filters, followed by:

    ...
    | eval Server=if(Disk_Usage>=80,"Affected_Servers","Total_Servers")
    | search Server="$SVR$"
    | table Time_Stamp,Environment,host,File_System,Total,Used,Available,Disk_Usage
    | sort - Disk_Usage
    | rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

While configuring the drilldown, I am using "SVR" as the parameter and "$trellis.value$" as the token, but the drilldown gives no results and I can see the token is not being passed. Also, please help me modify the target dashboard query so that clicking "Total_Servers" gives details of all disk usages, and clicking "Affected_Servers" gives details of only the disk usages at or above 80. Your kind inputs are highly appreciated! Thank you!
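A sketch of the drilldown in Simple XML, assuming the target dashboard's id is disk_usage_details and its input token is SVR (both placeholders). For a trellis split across aggregation fields like this, the panel label typically arrives in $trellis.name$ while $trellis.value$ carries the number shown, so passing the name may be what the target query's Server="$SVR$" filter needs:

    <drilldown>
      <link target="_blank">/app/search/disk_usage_details?form.SVR=$trellis.name$</link>
    </drilldown>

For the second request, one option is to relax the target query's filter so that "Total_Servers" matches every row:

    | where Server="$SVR$" OR "$SVR$"="Total_Servers"

since the Affected_Servers rows are just the subset with Disk_Usage>=80.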
Hello to everyone. I need to distribute a *.csv file that is created by a certain script (not by Splunk). The script runs every day and may update the file. How can I do this in the SHC? I tried pushing the file with the Deployer, but the main problem with this approach is that a lookup file is only created if it does not already exist on the SHC members: once I push it, I can't update it. I understand that I could develop an external script that deletes the old file on the SHC members and then pushes a new one with the Deployer, but maybe an easier way exists to resolve my case?
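One common pattern, sketched under the assumption that you can index the script's output: monitor the CSV into a staging index, then rebuild the lookup with a scheduled search on the SHC, because files written by outputlookup are replicated to all members automatically. The index, sourcetype, and field names below are placeholders:

    index=lookup_staging sourcetype=my_script_csv earliest=-24h
    | table field1 field2 field3
    | outputlookup my_lookup.csv

Schedule this to run shortly after the script's daily run, and the lookup stays current on every SHC member without involving the Deployer.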
In my Heavy Forwarder server, I have recently been seeing this message in the Messages tab:

File Integrity checks found 114 files that did not match the system-provided manifest. Review the list of problems reported by the InstalledFileHashChecker in splunkd.log [File Integrity Check View]; potentially restore files from installation media, change practices to avoid changing files, or work with support to identify the problem.

So how can we get this fixed?
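A sketch for listing the mismatched files the message refers to, using the component name it cites:

    index=_internal sourcetype=splunkd component=InstalledFileHashChecker
    | table _time host _raw

Files edited in place (for example, .conf files changed under default directories) commonly trigger this; the remedies are the ones the message lists: restore the files from installation media for your exact version, or keep local changes out of the shipped files.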
Whenever I run the command "splunk reload deploy-server" on my Deployment Master server, I get this message:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

So how can we get this fixed? Kindly help to check and update.
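A sketch of the setting the warning points at, in server.conf (enable it only if your management-port certificate's common name or SAN actually matches the host name the CLI connects to, otherwise CLI calls will start failing):

    [sslConfig]
    cliVerifyServerName = true

With the default self-signed certificates, leaving validation disabled is why the warning appears; switching to certificates issued for your host names is the prerequisite for turning it on.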
Whenever I restart, or stop and start, the Splunk Enterprise instance running on my HF or DM server, I get the message below, although the Splunk process starts as expected. How can we get rid of this message? Kindly let me know.

PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
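A sketch of the change the message asks for, in $SPLUNK_HOME/etc/splunk-launch.conf (only flip this once the HTTPS endpoints the embedded Python talks to present certificates it can verify, or those calls will fail):

    # enable certificate validation for the embedded Python's httplib/urllib
    PYTHONHTTPSVERIFY=1

A Splunk restart is needed for splunk-launch.conf changes to take effect.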
Our Splunk Heavy Forwarder and Deployment Master servers are running version 9.0.0, and when we navigate to Apps and click "Find More Apps" I get the error below:

Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name. Your Splunk instance is specifying custom CAs to trust using the sslRootCAPath configuration in server.conf's [sslConfig] stanza. Make sure the CAs in appsCA.pem (located at $SPLUNK_HOME/etc/auth/appsCA.pem) are included in the CAs specified by sslRootCAPath. To do this, append appsCA.pem to the file specified by the sslRootCAPath parameter.

So how can we get this fixed? I see the same error on all our HF servers and on the DM server as well. Kindly help to check and update.
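A sketch of the fix the error message itself prescribes (the custom CA path is a placeholder; use whatever your sslRootCAPath in server.conf's [sslConfig] stanza actually points to):

    # append the Splunkbase CA bundle to the custom CA file
    cat $SPLUNK_HOME/etc/auth/appsCA.pem >> /opt/splunk/etc/auth/my-custom-cacert.pem

    # restart so splunkd rereads the CA bundle
    $SPLUNK_HOME/bin/splunk restart

Repeat this on each HF and on the DM, since each instance validates the Splunkbase certificate against its own sslRootCAPath file.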