All Topics

Hi, I wanted to test some possibilities for indexing data using TLS/SSL certificates.
1. I configured TLS only on the indexer, not on the heavy forwarder, and data stopped indexing. Why? The same thing happened in the opposite direction (TLS only on the heavy forwarder, not on the indexer).
2. Is it possible to configure TLS/SSL certificates on a universal forwarder and make a connection with the indexer? Will it work?
3. Can we index data using two different ports? For example, 9997 without TLS and 9998 with TLS.
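For question 3, a minimal sketch of a dual-port setup: the indexer keeps a plain splunktcp input on 9997 and adds an SSL input on 9998, and any forwarder that should use TLS points its output at 9998. Certificate paths, the hostname, and the output group name below are placeholders, not values from the original post.

# inputs.conf on the indexer (hypothetical certificate paths)
[splunktcp://9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/indexer-cert.pem
sslPassword = <certificate password>
requireClientCert = false

# outputs.conf on a forwarder that should send over TLS
[tcpout:tls_group]
server = idx1.example.com:9998
clientCert = /opt/splunkforwarder/etc/auth/mycerts/forwarder-cert.pem
sslPassword = <certificate password>
sslVerifyServerCert = false

With this split, forwarders pointed at 9997 keep sending in cleartext while forwarders pointed at 9998 must present a working TLS configuration. It is also the usual explanation for question 1: if only one side of a connection is switched to TLS, the handshake fails and data stops flowing on that port.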
Hello, We have a field called client_ip which contains different IP addresses, and the events contain various threat messages. The ask is to exclude the IP addresses that appear in events with threat messages. The IPs are dynamic (different IPs daily) and the threat messages are also dynamic. Normally we would exclude them with NOT (IP) NOT (IP)..., but here there are hundreds of IPs and the query would become huge. What can be done in this case? My thought: can I create a lookup table that a user updates manually on a daily basis, and exclude the IP addresses present in that lookup, something like NOT (lookup table name)? If that is a good approach, please help me with the workaround and the query to use. Thanks in advance.
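A common pattern for this is a subsearch against the lookup, so the exclusion list lives in one CSV instead of the query. A minimal sketch, assuming a hypothetical lookup file named excluded_ips.csv whose column is named client_ip to match the event field:

index=your_index sourcetype=your_sourcetype NOT [ | inputlookup excluded_ips.csv | fields client_ip ]

The subsearch expands to (client_ip=a.b.c.d OR client_ip=...) and the NOT drops those events; if the lookup column has a different name, rename it to client_ip inside the subsearch so the expansion targets the right field.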
Register here. This thread is for the Community Office Hours session on AI Assistant in Observability Cloud on Tues, April 15, 2025 at 1pm PT / 4pm ET.
Ask the experts at Community Office Hours! An ongoing series where technical Splunk experts answer questions and provide how-to guidance on various Splunk product and use case topics.
What can I ask in this AMA? How can I use the AI Assistant to:
- Speed up investigations w/ faster root cause analysis and guided troubleshooting?
- Easily uncover key insights and make more informed decisions?
- Lower the learning curve for you and your team with Splunk Observability Cloud?
- Anything else you’d like to learn!
Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).
Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.
Look forward to connecting!
Register here. This thread is for the Community Office Hours session on Debugging Microservices with Splunk Observability Cloud on Tues, March 18, 2025 at 1pm PT / 4pm ET.
Ask the experts at Community Office Hours! An ongoing series where technical Splunk experts answer questions and provide how-to guidance on various Splunk product and use case topics.
What can I ask in this AMA?
- How do I detect issues with microservices and address them quickly?
- How do features like Service Centric Views, Tag Spotlight, and Trace Analyzer accelerate the troubleshooting process?
- How do Observability Cloud and Splunk Cloud/Enterprise work together to enhance troubleshooting capabilities?
- Anything else you'd like to learn about!
Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).
Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants. Looking forward to connecting!
The user has been removed from Splunk, and I am unable to locate any orphaned searches, reports, or alerts that were assigned to him
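One way to hunt for objects still owned by a deleted account is the REST endpoint for saved searches; a minimal sketch, assuming the removed account's username is known (the username below is a placeholder):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="removed_username"
| table title eai:acl.app eai:acl.owner eai:acl.sharing

If nothing comes back, the objects may already have been reassigned or deleted; the same pattern works against other knowledge-object endpoints such as /servicesNS/-/-/data/ui/views.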
Hi, We recently migrated from a standalone Search Head to a clustered one. However, we are having some issues running certain search commands. For example, this query is not working on the new SH cluster: sourcetype=dataA index=deptA | where critC > 25. On the old search head, this query runs fine and we see the results as expected, but on the SH cluster it doesn't yield anything. I have run the "sourcetype=dataA index=deptA" search by itself, and both environments see the same events. I am not sure why the search with "| where critC > 25" works on the standalone SH but not on the cluster. Any help would be appreciated. Thank you.
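A quick way to see whether critC is even being extracted (or is coming through as a string) on the cluster is to inspect its type; a minimal sketch, using the field name from the post:

sourcetype=dataA index=deptA
| eval critC_type = typeof(critC)
| stats count by critC_type

If the type comes back as String (or the field is missing entirely), the extraction that produced a numeric critC on the old search head probably was not migrated to the cluster; as a stopgap, | where tonumber(critC) > 25 forces a numeric comparison.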
Hi. Would it be possible for us to regularly read the statistics from the Protection Group Runs via the Splunk Add-on? These fields, which are also available via Helios, are of interest to us:
- Start Time
- End Time
- Duration
- Status
- Sla Status
- Snapshot Status
- Object Name
- Source Name
- Group Name
- Policy Name
- Object Type
- Backup Type
- System Name
- Logical Size Bytes
- Data Read Bytes
- Data Written Bytes
- Organization Name
This would make it much easier for us to create the necessary reports in Splunk. Thank you very much.
Right now I have a table list with fields populated where one process_name repeats across multiple hosts with the same EventID.
index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| table Computer, UtcTime, timestampType, activity, Channel, attck, process_name
I want a total sum of counts per host and process_name, with all activity (or target file names) listed underneath. For example:
Computer | UTC | timestampType | activity | process_name | count
1 | | | File list | same - repeats | (missing value)
2 | | | File list | same - repeats | (missing value)
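One way to get per-host, per-process counts with the file names rolled up underneath is to build the activity string first and then aggregate with stats; a minimal sketch along those lines, assuming the same base search and field names as above:

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| eval activity = EventDescription . ": " . TargetFilename . " by " . User
| stats count, values(activity) as activity, values(Channel) as Channel, earliest(UtcTime) as first_seen by Computer, process_name
| eval attck = "N/A"
| table Computer, first_seen, activity, Channel, attck, process_name, count

values(activity) turns the individual target file lines into a multivalue cell per Computer/process_name pair, and count gives the total number of matching events for that pair.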
I want to configure the dashboard I created so that it is displayed here.
Hello all, Consider that application X requested onboarding onto Splunk. We created an index for application X, a new role (restricted to the X index), and assigned this role to the X AD group. Likewise we have applications Y, Z, and so on, which we handle in the same manner. But now the requirement is that the X, Y, Z applications come under an umbrella 'A', and all 'A' team members (probably X, Y, Z combined) should be able to view the X, Y, Z applications. How can we achieve this? We can't create a single index for all of X, Y, and Z because the logs should not be mixed.
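Since the data can stay in separate indexes, one hedged approach is to add a role for the 'A' team that either inherits the three existing roles or is granted the three indexes directly, and map that role to the 'A' AD group. A minimal authorize.conf sketch with placeholder role and index names:

# authorize.conf (role and index names are placeholders)
[role_app_a_team]
importRoles = role_app_x;role_app_y;role_app_z

# alternatively, grant the combined index list directly:
# srchIndexesAllowed = idx_app_x;idx_app_y;idx_app_z
# srchIndexesDefault = idx_app_x;idx_app_y;idx_app_z

The X, Y, Z roles stay restricted to their own index, while the new role can search all three; the existing per-application AD group mappings do not need to change.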
We are migrating the Splunk 9.0.3 Search Head from a virtual box to a physical box. Splunk services were up and running on the new physical box, but in Splunk Web I was unable to log in using my authorized credentials, and I found the below error in splunkd.log: 01-21-2025 05:18:05.218 -0500 ERROR ExecProcessor [3275615 ExecProcessor] - message from "/apps/splunk/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator
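Before digging into DB Connect itself, it can help to confirm whether the KV Store came up at all on the new host; a minimal check, assuming a default $SPLUNK_HOME:

$SPLUNK_HOME/bin/splunk show kvstore-status

If the status is anything other than ready, the splunkd.log and mongod.log entries around startup usually explain why; common culprits after a host migration include stale KV Store SSL certificates, a changed server name, or file permissions on the kvstore directory.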
I have a search like this and it works fine, but not in a dashboard!
index=unis
| search *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval "DoorName" = if(sourcetype=="ARX:db", $Door$, $DoorName$)
When I use this in a dashboard, it treats Door and DoorName as tokens, while they are values of those fields. What should I do to make it work in Dashboard Studio? The error I get is: Set token value to render visualization $Door$ $DoorName$. Edit: if I remove all the $ signs it still works the same as in search, but it is still not working in the dashboard (without any error); it returns results but the DoorName field is empty.
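In dashboards, anything wrapped in single dollar signs inside a panel's search is treated as a token, and the documented way to get a literal $ is to double it; Dashboard Studio follows the same convention as Simple XML here. A sketch of the panel search with the dollars escaped (same query as above, only the $ handling changes):

index=unis
| search *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval "DoorName" = if(sourcetype=="ARX:db", $$Door$$, $$DoorName$$)

That said, if the intent is simply to read the field values, referencing the fields without any dollar signs (Door and 'DoorName') is the usual eval syntax; the empty DoorName after removing the dollars may simply mean the referenced fields are not present in those events rather than being a token problem.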
I want to access the lookup editing app using Python; how can I do that?
My requirement is that my start time is January 1, 2024 and my end time is January 7, 2024. In addition to placing the start and end times in a multivalue field, I also want to include each date in this interval, i.e. January 2, 2024, January 3, 2024, January 4, 2024, January 5, 2024, and January 6, 2024. The final field content should be January 1, 2024, January 2, 2024, January 3, 2024, January 4, 2024, January 5, 2024, January 6, 2024, and January 7, 2024. The SPL statement is as follows:
| makeresults
| eval start_date = "2024-01-01", end_date = "2024-01-07"
| eval start_timestamp = strptime(start_date, "%Y-%m-%d")
| eval end_timestamp = strptime(end_date, "%Y-%m-%d")
| eval num_days = round((end_timestamp - start_timestamp) / 86400)
| eval range = mvrange(1, num_days)
| eval intermediate_dates = strftime(relative_time(start_timestamp, "+".tostring(range)."days"), "%Y-%m-%d")
| eval all_dates = mvappend(start_date, intermediate_dates)
| eval all_dates = mvappend(all_dates, end_date)
| fields all_dates
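If the relative_time call only produces a single intermediate date (string concatenation against a multivalue range typically uses just one value), an alternative sketch is to iterate the offsets with mvmap, which is available in Splunk 8.0 and later; field names below follow the original query:

| makeresults
| eval start_date = "2024-01-01", end_date = "2024-01-07"
| eval start_timestamp = strptime(start_date, "%Y-%m-%d")
| eval end_timestamp = strptime(end_date, "%Y-%m-%d")
| eval num_days = round((end_timestamp - start_timestamp) / 86400)
| eval offsets = mvrange(0, num_days + 1)
| eval all_dates = mvmap(offsets, strftime(start_timestamp + offsets * 86400, "%Y-%m-%d"))
| fields all_dates

mvrange(0, num_days + 1) yields offsets 0 through 6, so all_dates ends up with the seven dates from 2024-01-01 to 2024-01-07 without the separate mvappend steps.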
Hi, I am now adding a new action "ingest excel" to the existing SOAR App CSV Import. Two dependencies are required for this action: pandas and openpyxl. However, after adding the dependencies in the App Wizard, it still shows me the output ModuleNotFoundError: No module named 'pandas'. I found that in the app JSON, my dependencies are only added to "pip_dependencies", but not "pip39_dependencies". Is that the reason why the dependencies are not installed? Please advise. Thank you.
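That would be consistent with the error: on SOAR installs that run apps under Python 3.9, the dependency list is read from "pip39_dependencies", so a list that only exists under "pip_dependencies" can be skipped. A hedged sketch of what the app JSON might contain with both keys populated (the exact schema depends on your SOAR version; treat this as an illustration, not a verified snippet):

"pip_dependencies": {
    "pypi": [
        {"module": "pandas"},
        {"module": "openpyxl"}
    ]
},
"pip39_dependencies": {
    "pypi": [
        {"module": "pandas"},
        {"module": "openpyxl"}
    ]
}

After editing the JSON, reinstalling or recompiling the app should trigger the dependency installation again; if it still fails, checking the app installation output for the pip step is the next place to look.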
Hello, I have started my journey into more admin activities. Currently I was attempting to add a URL (comment) under the "Next Steps" in a notable event, but it is grayed out. I have given my user all the related privileges (so this doesn't seem to be the issue). I also tried to edit this by going through Configure > Content Management and attempting to edit the search (alert) from there, but while trying to edit the notable action it is grayed out without the option to edit, with the comment "this alert action does not require any user configuration". I realize it is easier to edit that part for correlation searches, but I am attempting to edit alerts, not correlation searches.
Dear experts, according to the documentation, after stats I only have the fields left that were used during stats.
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF
To explain in detail: after the table command the following fields are available: importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode. After stats count, only zbpIdentifier and periodCount are left. Question: how can I change the code above to get the count and still have all fields available as before? Thank you for your support.
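One hedged option is to carry the extra fields through the stats call itself by aggregating them, for example with values() (or first(), depending on which value per zbpIdentifier you want); a minimal sketch based on the query above:

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount, values(zbpIdentifier_bp) as zbpIdentifier_bp, values(importZeit_uF) as importZeit_uF by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

If you need the count attached to every original event instead of one row per zbpIdentifier, eventstats count as periodCount by zbpIdentifier keeps all fields and all rows, and you can sort or dedup afterwards.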
When I click on the raw log and back out of it, it shows up as highlighted. How do I default the sourcetype/source to always show as highlighted? I've messed with props.conf and can't get it. This only started occurring after we migrated from on-prem Splunk to Splunk Cloud. Before, these logs would automatically show up parsed as JSON.
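Assuming "highlighted" here means the pretty-printed JSON view, that rendering depends on the events being recognized as JSON. A minimal props.conf sketch for the sourcetype (name is a placeholder), which on Splunk Cloud would typically go into a private app or be requested through support:

# props.conf (placeholder sourcetype name)
[your_json_sourcetype]
KV_MODE = json

# or, for index-time extraction, configured where the data is parsed
# (on the forwarder for universal forwarders, or on the Cloud side otherwise):
# INDEXED_EXTRACTIONS = json
# KV_MODE = none

If the on-prem deployment had INDEXED_EXTRACTIONS = json on a heavy forwarder or indexer that did not make the move to Cloud, that would explain events suddenly rendering as flat raw text.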
Hi, for the Splunk search head component I copied the configuration files from the old virtual machine to the new physical server. After that I started the Splunk services on the new physical box, but the Web UI is not loading and I get the message below: Waiting for web server at https://127.0.0.1:8000 to be available................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details. . Done
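The hostname-validation warning by itself is usually informational; it refers to the cliVerifyServerName setting under [sslConfig] in server.conf. A quick hedged check is to see whether the web port actually answers on the new host:

curl -k https://127.0.0.1:8000

If that returns nothing, $SPLUNK_HOME/var/log/splunk/web_service.log and splunkd.log around startup are the usual places to find why splunkweb did not come up; copied web.conf or server.conf SSL certificate paths that only existed on the old VM are a common cause after this kind of migration.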
I have an event like this:
~01~20241009-100922;899~19700101-000029;578~ASDF~QWER~YXCV
There are two timestamps in this. I have set up my stanza to extract the second one, but in this particular case the second one is what I consider "bad". For the record, here is my props.conf:
[QWERTY]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 43
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
TIME_PREFIX = ^\#\d{2}\#.{0,19}\#
MAX_DAYS_AGO = 10951
REPORT-1 = some-report-1
REPORT-2 = some-report-2
The consequence of this seems to be that Splunk indexes the entire file as a single event, which is something I absolutely want to avoid. Also, I do need to use line merging, as the same file may contain XML dumps. So what I need is something that implements the following logic:
if second_timestamp_is_bad:
    extract_first_timestamp()
else:
    extract_second_timestamp()
Any tips / hints on how to mitigate this scenario using only options / functionality provided by Splunk are greatly appreciated.
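There is no conditional timestamp extraction in props.conf, but since TIME_PREFIX is a PCRE regex, one untested idea is to let the regex itself decide how far to consume: swallow the first timestamp only when the second one does not start with 19700101, and stop before the first timestamp otherwise. A heavily hedged sketch, assuming the ~ delimiters from the sample event (the posted TIME_PREFIX uses # instead, so adjust the delimiter to whatever the real data contains):

# props.conf sketch - not verified, delimiters assumed from the sample event
[QWERTY]
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
TIME_PREFIX = ^~\d{2}~(?:\d{8}-\d{6};\d{3}~(?!19700101)|(?=\d{8}-\d{6};\d{3}~19700101))
MAX_TIMESTAMP_LOOKAHEAD = 20

The first alternative consumes through the first timestamp and its delimiter only if the following timestamp is not the 1970 placeholder, leaving extraction on the second timestamp; otherwise the zero-width second alternative matches and extraction falls back to the first timestamp. This would need careful testing against real data before relying on it, and MAX_DAYS_AGO may still reject anything that slips through as 1970.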