All Topics

Hello! A team at my organization is concerned about MongoDB 4.2 running on my Splunk hosts and wants me to create a plan to upgrade them to 6.0 at a minimum. From what I've read, it seems like this is either not possible or a bad idea due to possible modifications that have been made by Splunk. Is there a documented way to upgrade to MongoDB 6.0 or newer? Thanks.
Hi, I am trying to ignore the logs that have level info and want to send them to the null queue. Example log (not including the before-and-after pattern of the logs, but it's JSON format and this is one of the fields):  "level":"info",

I have tried the below and it does not work. Can someone help me check whether this is correct, or is there another way? The below is in the heavy forwarder props:

[abc]
TRANSFORMS-null = infonull

transforms:

[infonull]
SOURCE_KEY = level
REGEX = info
DEST_KEY = queue
FORMAT = nullQueue
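One possible configuration sketch, not a confirmed fix: at index time on the heavy forwarder there is no extracted level field yet, so the regex has to match the raw JSON text (SOURCE_KEY defaults to _raw and can be omitted). Stanza name [abc] is kept from the question:

props.conf
[abc]
TRANSFORMS-null = infonull

transforms.conf
[infonull]
# match the literal JSON key/value pair anywhere in the raw event
REGEX = "level"\s*:\s*"info"
DEST_KEY = queue
FORMAT = nullQueue
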
Hi, I would like to fix this alert to show the results in the Value column as a two-digit percentage with the percent sign after the number, for example 27%. Thank you in advance for your help.

My SPL:
index=perfmon object=Memory "% Committed Bytes In Use"
| where Value < 25
| table _time, host, Value, date
| dedup host

My result:
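One possible sketch for the 27% style display (assuming Value already holds the percentage as a plain number); dedup is moved before the formatting so the comparison and dedup still work on the numeric value:

index=perfmon object=Memory "% Committed Bytes In Use"
| where Value < 25
| dedup host
| eval Value = round(Value, 0) . "%"
| table _time, host, Value, date
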
We got output in a table, but all values are in one column for each field of the output table. We want to split the values into rows. Below is the output table for reference. Please help us split it.
Hello Team,

I would like to install a UF on a Linux server but I got confused. Which ports should I open: "9997 for the indexer cluster and 8089 for the deployment server" OR "9997 and 8089 for the deployment server"? Can anybody help with the port requirements?
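A rough sketch of where each port usually comes into play on the UF side (hostnames below are placeholders, not from the question); both connections are outbound from the UF:

outputs.conf  (data to the indexers, port 9997)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

deploymentclient.conf  (phone home to the deployment server, port 8089)
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
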
Hi, I need some help removing duplicates from a table. I am querying the accounts that use the plain-port connection for LDAP for a particular timestamp.

My query:
index=***  host=host1 OR host=host2 source=logpath
| transaction startswith=protocol=LDAP
| search BIND REQ NOT "protocol=LDAPS" NOT
| dedup "uid"

If I use the above query in a table, I get two values in a row, and for another timestamp the same value gets repeated even though I am using dedup. I have tried consecutive=true. In the uid column I am still seeing duplicates. The results came like this:

timestamp                                              uid
2023-12-12T05:44:23.000-05:00    abc xyz
2023-12-12T05:45:20.000-05:00    abc efg 123
2023-12-12T05:45:20.000-05:00    xyz 456 efg

I need each value in a single row and no duplicates should be displayed. Help will be much appreciated!
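One possible sketch (not a confirmed answer): transaction typically returns uid as a multivalue field, which is one reason dedup on it behaves unexpectedly. Expanding the multivalue field before deduplicating may help; field and host names are taken from the question:

index=*** host=host1 OR host=host2 source=logpath
| transaction startswith="protocol=LDAP"
| search BIND REQ NOT "protocol=LDAPS"
| mvexpand uid
| dedup uid
| table _time uid
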
I have two different logs where the error is captured in a different field in each log message (error_message and error_response). I have to capture both error_message and error_response without dropping the other logs.

Log 1: message:"Lambda execution: exit with failure", message_type:"ERROR", error_message:"error reason update"
Log 2: message:"Lambda execution: exit with failure", message_type:"ERROR", error_response:"updated error reason"

Expected output:

Error                                      count
error reason update            1
updated error reason          1
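A minimal sketch using coalesce to fold the two field names into a single column (the index name is a placeholder, and both fields are assumed to be extracted at search time already):

index=your_index message_type=ERROR
| eval Error = coalesce(error_message, error_response)
| stats count by Error
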
Hi, I want to execute a different SPL query in a Dashboard Studio panel based on a dropdown value. The dropdown has two items only: if we select "Item1" in the dropdown, then the particular panel of the dashboard should execute "Query1"; if "Item2" is selected in the dropdown, then the same panel of Dashboard Studio should execute "Query2".

Item1 = "Aruba NetWorks"
Item2 = "Cisco"

Query1 = index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src | eval node= if(isnotnull(node_vendor),"Cisco","Aruba NetWorks") | search node = $<dropdown token>$ | table node_dns node_ip region

Query2 = index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src | eval node= if(isnotnull(node_vendor),"Cisco","Aruba NetWorks") | search node = $<dropdown token>$ | table Name

Kindly guide. Thanks, Abhineet Kumar
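One possible sketch, since the two queries differ only after the search step: let the dropdown carry that query tail as its token value (token substitution into the query is plain text replacement, so pipes in the value should pass through). The option labels stay "Aruba NetWorks" and "Cisco"; the option values and the token name vendor_branch below are assumptions, not from the question:

Dropdown static options (label -> token value):
Aruba NetWorks  ->  node="Aruba NetWorks" | table node_dns node_ip region
Cisco           ->  node="Cisco" | table Name

Panel query:
index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
| eval node = if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
| search $vendor_branch$
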
Hi, I'm using loadjob savedsearch because my query is big and it takes time to load. I have some multi-select filters and I want to add a time range input filter.

(| loadjob savedsearch="mp:search:queryName" | where $pc$ AND $Version$)

I'm not sure how to do that because I need to use a field called Timestamp (I get it in my query; this is the time the event is written to the JSON file) and not the _time field. In addition, I don't know how to use loadjob savedsearch with a time range filter. Can you help me, please? Thanks, Maayan
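One possible sketch: loadjob ignores the time range picker when reloading results, but if the panel's time range is bound to the new time input, addinfo still exposes the selected bounds as info_min_time / info_max_time, which a parsed Timestamp can be compared against. The strptime format below is an assumption and should match the real Timestamp format; note that an "All time" selection sets info_max_time to +Infinity and needs separate handling:

| loadjob savedsearch="mp:search:queryName"
| where $pc$ AND $Version$
| addinfo
| eval ts = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S")
| where ts >= info_min_time AND ts <= info_max_time
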
I have gone through a few questions related to lookup file changes. I tried to use the same query to get the internal logs regarding my lookup file changes, but I am unable to fetch any logs. I would like to know where I can find information about the changes made to my lookup file, specifically the user who modified it and the respective time. I tried to search in the _audit index, but I am unable to find the exact logs (maybe the way I am searching is wrong). Could anyone please help me find the history of modifications/changes made to a lookup file?
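Not a confirmed answer, but one possible place to look: edits made through Splunk Web go through REST endpoints that are logged in _internal, so a sketch like the one below may surface who touched a lookup and when. The sourcetype and field names are assumptions and may differ by Splunk version; if the Splunk App for Lookup File Editing is in use, it writes its own lookup_editor logs in _internal as well:

index=_internal sourcetype=splunkd_ui_access method=POST uri="*lookup*"
| table _time user uri status
| sort - _time
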
Hi, I want to export browser test results to a CSV or some other file where I can see the performance of a browser test for the past year or month. How can this be done?
How do I get the difference between the latest value and now? I have multiple values in the latest column and only one value in the now column, and I want the differences as output.

latest                                now
1701973800.000000    1702372339
1701455400.000000
1701455400.000000
1700418600.000000
1700418600.000000

1701973800.000000 - 1702372339 =
1701455400.000000 - 1702372339 =
and so on.
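A minimal sketch, assuming both fields are epoch seconds and each latest value should have the single now value subtracted from it (if "now" is literally the current time, now() can replace the field):

| eval diff = latest - now
| eval diff_readable = tostring(abs(diff), "duration")

If latest is one multivalue cell rather than separate rows, mvmap can apply the subtraction per value, for example | eval diff = mvmap(latest, latest - now).
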
Hi experts, I want to extract the below fields into separate events to work on them further.

INFO 2023-12-11 17:06:01,726 [[Runtime].Pay for NEW_API : [ { "API_NAME": "wurfbdjd", "DEP_DATE": "2023-12-08T00:00:00" }, { "API_NAME": "mcbhsa", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "owbaha", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "pdjna7aha", "DEP_DATE": "2023-11-20T00:00:00" } ]

I want to extract DEP_DATE and API_NAME into separate rows:

DEP_DATE                            API_NAME
2023-12-08T00:00:00       wurfbdjd
                                               mcbhsa
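One possible sketch (the index name is a placeholder; the rex assumes the JSON array always follows "NEW_API :" and runs to the end of the event):

index=your_index "Pay for NEW_API"
| rex field=_raw "(?s)NEW_API : (?<json_payload>\[.*\])"
| spath input=json_payload path={} output=item
| mvexpand item
| spath input=item
| table DEP_DATE API_NAME
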
I've installed Python for Scientific Computing (Windows 64-bit) because it's a requirement for MLTK. While I'm setting up the Predict Numeric Fields experiment, there is an error in the fit command. The error message is:

Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.

What should I do to solve this problem?
Hi All, I need some help with searching. I have one index, but it has multiple sources:

Index = Index1
Source = source 1
Source = source 2
Source = source 3
Source = source 4
Source = source 5
Source = source 6
Source = source 7

Now I have a requirement to create an alert search with only the first four sources and exclude the remaining three (source 5, 6, 7). I tried the below query:

Index = Index1 source IN ("source 1","source 2","source 3","source 4")

When I tried to exclude the other sources, I got an error. Can you help with this?

Index = Index1 source IN ("source 1","source 2","source 3","source 4") source NOT IN ("source 4","source 5","source 3","source 6")
or
Index = Index1 source ! IN ("source 4","source 5","source 6") source IN ("source 1","source 2","source 3","source 4") source ! IN ("source 4","source 5","source 3","source 6")
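For reference, SPL has no ! IN operator; exclusion is written by negating the whole IN clause with NOT. Two sketches (the second assumes sources 1-7 are the only sources in the index):

index=Index1 source IN ("source 1","source 2","source 3","source 4")

index=Index1 NOT source IN ("source 5","source 6","source 7")
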
Hi everyone, we have an on-premises edge device in a remote location, and it is added to the cloud. I would like to monitor and set an alert for both the device-offline and recovery statuses. While I can set an alert for the offline status, I'm a bit confused about including the recovery status. Can you please assist me in configuring the alert for both scenarios?
What user permissions must be granted when installing Java Agent binaries?

While installing Java Agents, it is mandatory for the user running the JVM to have certain access permissions for the agent binaries. In this article, find an approach for assigning these permissions, which you can validate in the order that best suits the environment in which the agents are being installed.

In this article...
How do I grant required user permissions when installing agent binaries?
Examples
Additional resources

How do I grant required user permissions when installing agent binaries?

While installing Java agents, it is mandatory for the user running the JVM to have certain access permissions to the agent binaries. The following approach can be validated for use in the order best suited to the environment in which the agents are being installed; for example, you can make slight adjustments based on the OS version and type and the security privileges allowed at the environment level.

The user must have write privileges to the conf and logs directories in the Java Agent home. One way to achieve this is to install the agent as the same user that owns the JVM.
Provide admin or 777 permissions to the agent binaries recursively.
Grant writable permissions to conf/logs and read permissions for all files recursively.
Grant executable permissions to the javaagent.jar file referenced in the installation.

Examples

For RHEL 9 with SELinux turned on: executable permissions on javaagent.jar will be needed for all users (chmod a+rx <path>/javaagent.jar).
For IBM WAS running on AIX/UNIX/Linux: try setting the owner of the agent binaries to the user running the JVM/WAS to avoid permission conflicts.

A consolidated shell sketch of these commands appears after the resources below.

Additional resources

See Install the Java Agent in the documentation.
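As a rough shell sketch of the permissions described above, assuming the agent home is /opt/appdynamics/javaagent and the JVM runs as user appuser (both are placeholders, not values from the article):

# let the JVM user own the agent directory
chown -R appuser:appuser /opt/appdynamics/javaagent
# read for all files, traverse for all directories
chmod -R a+rX /opt/appdynamics/javaagent
# write access where the agent needs it (conf and logs)
chmod -R u+w /opt/appdynamics/javaagent/conf /opt/appdynamics/javaagent/logs
# RHEL 9 / SELinux case: javaagent.jar readable/executable by all users
chmod a+rx /opt/appdynamics/javaagent/javaagent.jar
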
Hi all, I built a dedicated Search Head Cluster with 3 members and a deployer to load and test how DB Connect works in a shcluster: Splunk Enterprise 9.1.2 and DB Connect 3.15.1. The configs replicate fine across the members and I am running several inputs. It appears that all of the inputs so far are running on the captain only. I am wondering if this is normal behavior, and whether the captain will start distributing input jobs to other members once it is maxed out.

I am running this search to see the input jobs:

index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| table _time host connection input_name db_read_time status start_time end_time duration read_count write_count error_count
| sort - _time

All inputs are successful, and the host field is always the same - it is the captain. The other members give me messages like this:

2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.045278310775756836 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 41
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.04212641716003418 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 38
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request

Thoughts? Is the SHC supposed to distribute these inputs the way it would distribute scheduled searches?
How can I mask the verification code using props/transforms?

{"body": " Verification Code: 123456",

I want to mask the code using props and transforms in the below format; I am not sure how the search-time SPL regex differs from the regex in transforms.

props.conf
[source::abc]
TRANSFORMS-anonymize = abc-anonymizer

transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX =
FORMAT = $1######$2
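A hedged sketch of what the missing REGEX might look like: unlike a search-time rex, an index-time transform with DEST_KEY = _raw replaces the entire raw event with whatever FORMAT builds, so the regex needs to capture everything before and after the digits (the six-digit assumption comes from the sample event):

props.conf
[source::abc]
TRANSFORMS-anonymize = abc-anonymizer

transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX = ^(.*Verification Code: )\d{6}(.*)$
FORMAT = $1######$2

For simple masking, a SEDCMD in props.conf is a lighter alternative that avoids the transform entirely, for example SEDCMD-maskcode = s/(Verification Code: )\d{6}/\1######/g.
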
[1pm PT / 4pm ET] - Register here. This thread is for the Community Office Hours session on Getting Data In (GDI) to Splunk Cloud Platform and Edge Processor (Workshop Special) on Wed, Jan 17, 2024 at 1pm PT / 4pm ET.

We will start this Office Hours session with a special workshop demo on Edge Processor. Then, we will address any pre-submitted (or live) questions related to getting data into Splunk Cloud Platform or using Splunk Edge Processor, including:

How to configure and deploy Edge Processor
Building SPL2 pipelines in Edge Processor
Use cases that Edge Processor can help with (reducing firewall logs, enriching events, masking PII data, routing to S3 for low-cost storage, etc.)
Getting syslog data in or getting data in via HEC
How to filter, mask, enrich, and route your data
Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!