All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello all, I am managing a Splunk architecture with an Enterprise license. Sometime this year I will need to migrate from my current architecture to a new one, eliminating the old one. Will I be able to just copy the license file into the license manager of the new architecture? Is there some contractual problem with this procedure? Thanks a lot.
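Installing an existing license file on a new license manager is typically done through the UI (Settings > Licensing) or the CLI. A minimal sketch, assuming you still have the original .lic/.xml file and that the new deployment replaces the old one rather than running alongside it (whether the terms allow a temporary overlap is a question for your Splunk sales contact; the command names below are as of recent 9.x releases, and the paths/hostnames are placeholders):

# On the new license manager
$SPLUNK_HOME/bin/splunk add licenses /tmp/enterprise.lic

# Point each new indexer/search head at the new license manager
$SPLUNK_HOME/bin/splunk edit licenser-localpeer -manager_uri https://new-lm.example.com:8089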
Hi, I am trying to create a dashboard similar to the Incident Review dashboard. I want users to be able to select a time range and a detection name to display notables per time/rule, but also to show only the fields listed after the | fields command in the correlation search. I was able to accomplish part of this by merging the notable index with the REST API command and then extracting the text after | fields with a regex, which gave me a list of fields per detection, such as:

title: Brute Force
field_values: _time, created_at, ip, md5, attempts...

I need the list of fields within field_values to be shown as actual fields whenever the user selects a specific detection. This needs to be dynamic because the fields change per detection.

Search:

index=notable
| rename search_name as title
| search title="*"
| join type=left title
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
     | eval disabled=if(disabled=1,"true","false")
     | search disabled=false actions IN ("*notable*")
     | rex field=search "\|\s*(fields|table)\s+(?<field_values>.*)"
     | fields title field_values]

I appreciate any help, since I have been scratching my head for a couple of weeks now. Thanks in advance.
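One possible approach, if this is a classic (Simple XML) dashboard (a sketch, not a tested solution): let a hidden search resolve field_values for the selected detection, store it in a token, and use that token as the column list. The token names ($detection_tok$, $field_tok$) are illustrative:

<!-- Hidden search: resolve the field list for the chosen detection -->
<search>
  <query>
    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search title="$detection_tok$"
    | rex field=search "\|\s*(?:fields|table)\s+(?&lt;field_values&gt;.*)"
    | fields field_values
  </query>
  <done>
    <set token="field_tok">$result.field_values$</set>
  </done>
</search>

<!-- Main panel: the token expands into the column list -->
index=notable search_name="$detection_tok$" | table $field_tok$

This relies on token substitution happening before the search is parsed, so the comma-separated field list becomes the arguments to | table.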
Hi all, I'm looking to create a dashboard to capture various info on our proxy data. I have a few simple queries:

index=siem-proxy | top limit=5 cs_method

and my other query:

index=siem-proxy | top limit=8 dest_port

This gets the request methods such as POST, GET, etc. I want to add this to a dashboard, but I'm looking to streamline the queries first. I tried using tstats but was getting nothing back, so I think I'm getting the syntax wrong. Without streamlining, the queries take a very long time to run, as I have millions of events. Is there a way to put this into a tstats query that I can use as a visualization? Thank you.
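tstats only reads indexed fields (or accelerated data models), which is the usual reason it returns nothing here: cs_method and dest_port are search-time fields. If the proxy data is CIM-mapped and the Web data model is accelerated, a sketch like this should be fast (the data model and field names assume CIM compliance):

| tstats count from datamodel=Web where index=siem-proxy by Web.http_method
| sort - count
| head 5

If the data is not CIM-mapped, tstats can still only answer questions about indexed fields (host, source, sourcetype), so accelerating a data model may be the practical route.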
Hi Splunk Experts, I am trying to make a health check dashboard for our application. First thing on the list is to monitor the status of Linux processes. My plan is to execute a shell script on the server that writes the process status as 0 or 1, where 0 is running and 1 is down. This is written to a log, and the log is pushed to Splunk. My requirement is to create a dashboard which displays the service name and its status as Green or Red. I just wanted to know whether this is the right approach, or is there an alternative way to achieve the same more efficiently?
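The approach works; a scripted input on the UF can also run the script and ingest its output directly, skipping the intermediate log file. Either way, the dashboard side could look like this sketch (index, sourcetype, and field names are assumptions):

index=app_health sourcetype=process_status
| stats latest(status) as status by service_name
| eval health=if(status=0, "Green", "Red")
| table service_name health

In the dashboard table you can then apply a color format to the health column, mapping Green and Red to matching cell colors.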
Good afternoon, I have a very strange problem. I have a log with these two events:

01/02/2024 13:06:16 - SOLISP1 IP: 10.229.87.80 USER-AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0
01/02/2024 13:00:54 - GGCARO3 IP: 10.229.87.80 USER-AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0

The date format in the event is dd/mm/yyyy. Splunk indexes one of them in January and the other in February. We have tried editing the props file as follows:

[default]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S

Does anyone know what might be happening?
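Two common causes fit these symptoms: the settings are not scoped to the data (a [default] stanza can be overridden by the sourcetype's own settings), or they were applied somewhere other than the first full parsing tier (heavy forwarder or indexer), where timestamp extraction happens. A sketch of a scoped stanza, assuming the events arrive with sourcetype "mylog" (the stanza name is an assumption):

[mylog]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

Note that already-indexed events keep their original timestamps; only newly indexed data picks up the change.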
Hi, for a table, how can I get back to the 8.0 colors? The 9.0 colors are very bright.
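There is no single switch to restore the old palette, but in Simple XML you can pin explicit colors per column, which sidesteps the 9.0 defaults. A sketch (the field name, hex values, and thresholds are illustrative, not the exact 8.0 palette):

<format type="color" field="count">
  <colorPalette type="list">[#53A051,#F8BE34,#DC4E41]</colorPalette>
  <scale type="threshold">70,90</scale>
</format>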
Hi, according to the Splunk docs, from version 9.1 "the installer creates a virtual account as a "least privileged" user called splunkfwd". After an upgrade to version 9.1.2 I am having trouble with the UF autostarting. Looking at the Windows Event Logs, I can see an error which suggests the account is actually "SplunkForwarder", not "splunkfwd". When I check the Windows service Log On user, I also see "SplunkForwarder". And "SplunkForwarder" is also the only Splunk-related user I can see when I run the following command to list all users:

get-service | foreach {Write-Host NT Service\$($_.Name)}

Can someone confirm that the doc is incorrect and the virtual account created is in fact SplunkForwarder? Or is "splunkfwd" created somewhere else? Thanks
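For what it's worth, a quick way to cross-check which account each Splunk service actually logs on with (a convenience check, not an official verification method):

# List Splunk-related services and their log-on accounts
Get-CimInstance Win32_Service |
    Where-Object { $_.Name -like "*Splunk*" } |
    Select-Object Name, StartName, State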
HF1 has the sender add-on and outputs.conf configured with UDP and the input IP interface (default configuration), but it is not working. We have checked connectivity to the UDP port with "nc -vzu host port" and it shows as open. Any ideas?
I have a challenge: when somebody makes changes to our AD, it is done using a CyberArk account. In order to find the person behind the CyberArk account, I need to go back and find the event where that person checked out the account. So I have an AD change at 01:27 with user=pam-serveradmin01, and from CyberArk at 01:05, account=pam-serveradmin and user=clt. How would you build this query?
Hi Folks, we have thousands of universal forwarders that are currently running an old version (7.0.2). We are planning to upgrade the universal forwarders to the most recent version, but before we do that we would like to reduce the overall footprint by uninstalling them from servers that are no longer sending logs. Logs for a few applications and infrastructure were migrated to Azure, so those servers no longer send to Splunk. I need to find a list of such servers so I can uninstall the forwarders before the mass upgrade. Is there a query that can give me the list of hostnames along with the timestamp of the last log each one sent? Thanks in advance.
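A common starting point (a sketch; adjust the index filter and the staleness threshold to your environment):

| tstats latest(_time) as last_event where index=* by host
| eval last_event_time=strftime(last_event, "%Y-%m-%d %H:%M:%S")
| where last_event < relative_time(now(), "-30d")
| sort 0 + last_event

Forwarders that still phone home without sending data also show up in index=_internal, so checking there as well helps separate "no logs" from "forwarder gone".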
Having issues accessing my dashboard that I'm seeing using my Coursera course link...
Hi @ITWhisperer, I have two datasets from two search queries. I need to fetch the common as well as distinct values from both datasets in the final result. Something like this:

Field1  Field2  Result
1       2       1
3       4       2
5       6       3
7       8       4
9       10      5
10              6
                7
                8
                9
                10

Can you please help with the query?
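One way to get the union column (a sketch; search1 and search2 stand for your two queries, and Field1/Field2 are assumed to be single-value columns):

search1
| fields Field1
| rename Field1 as Result
| append
    [ search2
      | fields Field2
      | rename Field2 as Result ]
| dedup Result
| sort 0 Result

dedup keeps the common values once, while append keeps the values unique to either side. If you also need the original Field1/Field2 columns alongside Result, appendcols over the three sorted lists is one option.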
Hi, we're trying to connect ePO via syslog to Splunk. We've followed the steps provided in the ePO add-on documentation and were able to capture logs from ePO. However, the logs are encrypted. Raising this concern to our ePO support, he suggested two things:

1. Enable the TLS/cipher suites supported by ePO on the Splunk side
2. Add Splunk as a registered server and make sure the test syslog is successful

Following the Splunk documentation, we always get a failed test syslog. Scouring different docs and community posts about other SIEM brands, most seem to have had success connecting to ePO once they verified that the cipher suites supported by ePO exist and are enforced on their collector. Going from this, is there a way to check/verify which cipher suites are used by Splunk? I've seen the document regarding Splunk TLS, and it seems the cipher suites supported by ePO are included in the defaults, but is there a way to verify this? Our setup is as follows:

- Configured HF on a Windows server
- Configured inputs.conf as below:
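A couple of ways to inspect this from the Splunk side (a sketch; the host and port are placeholders). The [SSL] stanza in inputs.conf also accepts an explicit cipherSuite setting if you need to enforce a specific list:

# Show the cipher suites Splunk's bundled OpenSSL supports
"%SPLUNK_HOME%\bin\splunk" cmd openssl ciphers -v

# Test the TLS handshake against the HF's listening port and see what is negotiated
"%SPLUNK_HOME%\bin\splunk" cmd openssl s_client -connect hf-host:6514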
Can we monitor the WhatsApp chat bot used for mobile banking to know its performance, using mobile real user monitoring or synthetic monitoring?
I have a multivalue field, and am hoping I can get help replacing all the non-alphanumeric characters within a specific place in each value of the mvfield. I am taking this multivalue field and creating a new field, but my regex simply ignores entries whenever there is a special character. I have to get rid of these characters, so I'm trying to find a way to remove them before the value reaches my eval statement that creates the new field. I know the problem is the capture group around the "name" value, as it only allows \w and \s: name\x22\x3a(?:\s+)?\x22([\w\s]+)\x22. But I'm not sure how to fix it. I've tried extracting the name field first and using sed to remove the characters, but then I don't know how to "re-inject" it back into the mv-field, or how to build my new field while referencing the now-clean name field. Any ideas???

Sample data:

{"bundle": "com.servicenow.blackberry.ful", "name": "ServiceNow Agent\u00ae - BlackBerry", "name_version": "ServiceNow Agent\u00ae - BlackBerry-17.2.0", "sw_uid": "faa5c810a2bd2d5da418d72hd", "version": "17.2.0", "version_raw": "0000000170000000200000000"}
{"bundle": "com.penlink.pen", "name": "PenPoint", "name_version": "PenPoint-1.0.1", "sw_uid": "cba7d3601855e050d8new0f34", "version": "1.0.1", "version_raw": "0000000010000000000000001"}

SPL to create the new field:

| eval new = if(sourcetype=="custom:data", mvmap(old_field,replace(old_field,"\x7b.*?\x22bundle\x22\x3a\s+\x22((?:net|jp|uk|fr|se|org|com|gov)\x2e(\w+)\x2e.*?)\x22.*?name\x22\x3a(?:\s+)?\x22([\w\s]+)\x22.*?\x22sw_uid\x22\x3a(?:\s+)?\x22(?:([a-fA-F0-9]+)|[\w_:]+)\x22.*?\x22version\x22\x3a(?:\s+)?\x22(.*?)\x22.*$","cpe:2.3:a:\2:\3:\5:*:*:*:*:*:*:* - \1 - \4")),new)

This creates one good and one bad entry:

{"bundle": "com.servicenow.blackberry.ful", "name": "ServiceNow Agent\u00ae - BlackBerry", "name_version": "ServiceNow Agent\u00ae - BlackBerry-17.2.0", "sw_uid": "faa5c810a2bd2d5da418d72hd", "version": "17.2.0", "version_raw": "0000000170000000200000000"}
cpe:2.3:a:penlink:PenPoint:1.0.1:*:*:*:*:*:*:* - com.penlink.penpoint - cba7d3601855e050d8new0f34
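Two sketches, either of which might fit. The simplest fix is to broaden the capture group so it stops excluding special characters, e.g. replace ([\w\s]+) with ([^\x22]+) (anything up to the closing quote). Alternatively, strip the \uXXXX escape sequences from the mv-values first, then run the existing replace on the cleaned field (the doubled backslashes are needed because eval string literals consume one level of escaping):

| eval old_clean = mvmap(old_field, replace(old_field, "\\\\u[0-9a-fA-F]{4}", ""))
| eval new = if(sourcetype=="custom:data", mvmap(old_clean, replace(old_clean, "<your existing pattern>", "<your existing template>")), new)

The placeholders stand for the long pattern and template already shown above.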
We have a file that is rotated at midnight every night. The file is renamed and zipped up. Sometimes after the log rotation, Splunk does not ingest the new file. There are no errors in the splunkd log relating to CRC or anything along those lines. A restart of Splunk resolves the issue; however, we would like to find a more permanent solution. We are on UF version 9.0.4. Appreciate any suggestions you may have.
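If the new file happens to begin with the same bytes as the previous one (an identical header line, for example), the tailing processor's CRC check can treat it as already read and resume at the old offset, and that failure mode does not always log an error. A sketch of the usual mitigation in inputs.conf (the monitor path and sourcetype are placeholders):

[monitor:///var/log/myapp/app.log]
sourcetype = myapp
# Extend the CRC window beyond the shared header, in bytes
initCrcLength = 1024

Comparing "splunk list monitor" output before and after rotation may also show whether the file is being tracked at all.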
Hello, I need a proxy connection when I use TA-tenable-easm on Splunk. Is there a way, or a guide, to set up a proxy for TA-tenable-easm?
Hello, how do I display the date range from the time range dropdown selector in Dashboard Studio? Thank you for your help. I am currently using the "Table" visualization type, and created a data configuration with the following search. info_min_time and info_max_time gave me duplicate data for each row, and I had to use dedup. Is this a proper way to do it? Is there a way to use the time tokens ($timetoken.earliest$ or $timetoken.latest$) from the time range dropdown selector in the search from the data configuration (not in XML)?

index=test
| addinfo
| eval info_min_time="From: ". strftime(info_min_time,"%b %d %Y %H:%M:%S")
| eval info_max_time="To: ". strftime(info_max_time,"%b %d %Y %H:%M:%S")
| dedup info_min_time, info_max_time
| table info_min_time, info_max_time
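A sketch of an alternative that avoids the dedup: collapse to a single row before addinfo, and reference the picker's tokens directly in the query (this assumes the time range input's token is named timetoken, as in the post):

index=test earliest=$timetoken.earliest$ latest=$timetoken.latest$
| stats count
| addinfo
| eval from="From: ".strftime(info_min_time,"%b %d %Y %H:%M:%S"),
       to="To: ".strftime(info_max_time,"%b %d %Y %H:%M:%S")
| table from to

Since stats returns one row, addinfo annotates just that row and no dedup is needed.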
I am new to Splunk and I have inherited a system that forwards logs in CEF CSV format. These logs are then tar'd up and sent to the distant end (which does happen successfully). The issue I have is that when the Splunk server picks up the CEF CSV, it has epoch time as the first entry of every log in the file. This makes the next hop/stop aggregator I send to unhappy.

original host (forwarder) -> splunk host -> splunk host -> master aggregator (arcsight type server)

Example:

1706735561, "blah blah blah"

The file cef.csv says it's doing "_time","_raw". When I look at what I think is the setup for time (etc/datetime.xml), _time does not have anything about epoch or %s in there. How do I configure the CEF CSV to omit the epoch time? As I mentioned earlier, I am totally new to Splunk. Any help would be fantastic.
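The "_time","_raw" header suggests the file is an export of search results (each result carries its _time as epoch seconds), not something datetime.xml controls; datetime.xml governs timestamp extraction at ingest, not output. If the csv is produced by a scheduled search, a sketch of the fix is to drop _time before writing (the index and file name are assumptions about how your pipeline generates cef.csv):

index=cef_data
| table _raw
| outputcsv cef

If the file comes from some other mechanism, the same principle applies: keep only _raw in whatever generates the csv.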