All Topics



I may use a search similar to this:

index=mock_index source=mock_source | eval event = _raw | stats count as frequency by event | table event, frequency

which results in a table similar to the one below:

Event                                                  Frequency
2022-08-22 13:11:12 [stuff] apple.bean.34 [stuff]      2000
2022-08-22 14:18:22 [stuff] apple.bean.86 6 [stuff]    200
2022-08-22 15:17:42 [stuff] apple.bean.1 546 [stuff]   2

Some of the tables I get from this search produce an error stating that the search_process_memory_usage_threshold has been exceeded. If I know that I am not interested in rows where the frequency is less than 1,000, is there a way to limit the table so it only shows the rows above 1,000? Would this also improve memory usage?
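One possible approach is to filter on the aggregated count before the table is built. A minimal sketch, reusing the mock field names from the question:

index=mock_index source=mock_source
| eval event = _raw
| stats count as frequency by event
| where frequency >= 1000
| table event, frequency

Dropping the low-frequency rows right after stats should shrink what the search has to carry into the table, though whether it avoids the search_process_memory_usage_threshold error depends on where the memory is actually consumed, since stats itself still has to see every distinct event.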
I want to capture the Path (\Απεσταλμένα) and Subject (TYPICAL MAIN SHELF). I am using the regexes below:

Subject\W\s(?<Subject>.*)
rex "Path\W\s(?<Path>\W.*)"

But these are not working: the Path is not captured at all, while the Subject capture pulls in many more lines than required. Can someone please help?

Sample event:

PH0PR07MB8510A5DC1014429F3B411EB1E39B9@PH0PR07MB8510.namprd07.prod.outlook.com>
IsRecord: false
ParentFolder: { [-]
  Id: LgAAAACYR3ou5YLkQLdwhKR5o0aGAQDzGy/hF08sRpmozaW+A2HqAAAAdHcNAAAB
  Path: \Απεσταλμένα
}
SizeInBytes: 180998
Subject: TYPICAL MAIN SHELF
}
LogonType: 0
LogonUserSid: S-1-5-21-2050334910-350505970-4048673702-5100548
MailboxGuid: 967cf2f1-6b52-4e79-bf98-1hnfj55667
MailboxOwnerSid: S-1-5-21-2050334910-350505970-499886553
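A possible sketch, assuming Path: and Subject: each end at a line break in the raw event, as in the sample above:

| rex "Path:\s+(?<Path>[^\r\n]+)"
| rex "Subject:\s+(?<Subject>[^\r\n]+)"

If the whole event is actually ingested as a single line, the captures would need to stop at the next key instead, for example (?<Path>\S+) for the path and (?<Subject>.+?)\s+\} for the subject.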
Hi there, so I have a search which contains the field myMetric (created via field extraction). I want to show a dashboard panel presenting only myMetric on the y-axis and time on the x-axis. I fail when using "| timechart", since I am forced to use a statistical function or count (I want to show myMetric itself, not the count). Using "| eventstats", my first problem was that the dashboard legend showed far too many fields, but I was able to remove them using "| fields - a,b,c". However, the x-axis is labeled "Time" instead of showing concrete datetimes. So how can I achieve this?
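One way around the aggregation requirement is to let timechart bucket time finely enough that each bucket holds at most one event, so the aggregate just passes the value through. A sketch, where span=1m is an assumption to adjust to the data:

<your base search>
| timechart span=1m max(myMetric) as myMetric

With timechart the x-axis is _time, so the chart shows real datetimes rather than a generic "Time" label. Alternatively, "| table _time myMetric" rendered as a line chart also plots myMetric against concrete timestamps.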
[tcp-ssl://9515]
disabled = 0
index = myindex
connection_host = ip
sourcetype = mysourcetype
_TCP_ROUTING = myindexcluster

The stanza above allows raw events and default fields to be put into the indexer. The stanza below allows indexed CSV fields (structured data) to be put into the indexer. The props.conf entry for the sourcetype is used by both the TCP and the disk-file input, and I am using identical CSV files as data for each. Why can't the TCP-ingested CSV file be indexed by the forwarder and sent to the indexer?

[batch:///data/myfolder]
move_policy = sinkhole
disabled = 0
index = myindex
sourcetype = mysourcetype
crcSalt = <SOURCE>
recursive = false
_TCP_ROUTING = myindexcluster
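As far as I understand the structured-data documentation, INDEXED_EXTRACTIONS is only applied to file-based inputs (monitor, batch, oneshot) on the instance that reads the file; data arriving over a tcp or tcp-ssl input skips that structured-parsing step, which would explain the difference between the two inputs above. For reference, a minimal props.conf sketch of the entry both inputs point at (the existing entry may already look like this):

# props.conf on the forwarder - sketch only
[mysourcetype]
INDEXED_EXTRACTIONS = csv

If the CSV fields are needed for the TCP-delivered data as well, search-time extraction on the indexer/search head, or shipping the files instead of streaming them, may be the more reliable route.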
Given the example events below:

Initial event:
[stuff] apple.bean.carrot2donut.57.egg.fish(10) max:311 min 15 avg 101 low:1[stuff]

Result event 1:
[stuff] apple.bean.carrot&donut.&.egg.fish(&) max:& min & avg & low:&[stuff]

Result event 2:
[stuff] apple.bean.carrot2donut.57.egg.fish(&) max:& min & avg & low:&[stuff]

I want to get Result 2 rather than Result 1. I want to replace any series of numbers with an ampersand only if one of three conditions is true:

1. The number series is preceded by a space.
2. The number series is preceded by a colon.
3. The number series is preceded by an open parenthesis and followed by a closed parenthesis.

If I use the replace line below, the new variable will contain Result 1 rather than the Result 2 I desire:

| eval event = replace(_raw, "[0-9]+", "&")

How do I get Result 2 instead?
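One possible approach, since replace() accepts PCRE, is to use lookarounds so that only digits with the required neighbours are touched. A sketch:

| eval event = replace(_raw, "(?<=[ :])\d+|(?<=\()\d+(?=\))", "&")

The first alternative covers digits preceded by a space or a colon; the second covers digits wrapped in parentheses. Against the sample event this leaves carrot2donut and .57. untouched and produces Result event 2, but it is worth testing against the real data.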
Hi, we run Splunk Enterprise 9.0.0 and we forgot to add an indexer to a license pool (7 orphan_peer licensing alerts). Now we get the error message "have exceeded your license limit too many times" on this indexer. We disassociated and re-associated this indexer in the pool, but the error message is still present. Could you tell me how to correct pool_violated_peer_count? We also get the licensing alert "Correct by midnight to avoid violation". Does that mean this license violation/limit will be removed today after midnight? Regards, Chris
Hi, is there a way to rename a specific value in a column of a table? For example:
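Without the example it is hard to be specific, but a minimal sketch using a hypothetical column named status and placeholder values would be:

| eval status=if(status="old_value", "new_value", status)

replace() works as well if the change is pattern-based rather than an exact value.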
Hi all! So I am helping the networking team transition their logging to Splunk, and last week I discovered the Cisco Meraki Add-on. I also discovered that in order to install the add-on, as well as configure any part of it (connection, inputs, etc.), I need a pretty high permission level (it requires the capability admin_all_objects). Since I am not a Splunk "admin" here at work, I am wondering if there is an existing role that might allow me to configure add-ons but not allow me to manage "all objects"? Our actual Splunk admin is a super busy guy, so I am trying to help him out on this. I do have a higher level of access than most, but needing all objects for an add-on seems incredibly silly. Thanks!!
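One thing worth exploring with your admin is a custom role that carries only the app-management capabilities. A sketch of an authorize.conf stanza; the capability name edit_local_apps is an assumption to verify against the authorize.conf spec for your Splunk version, and the add-on's own setup pages may still insist on admin_all_objects:

# authorize.conf - sketch only; confirm capability names for your version
[role_addon_manager]
importRoles = power;user
edit_local_apps = enabled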
Hi, we have a situation in the PCI compliance app. The alerts are triggered and then acknowledged. A user from the ISOC has acknowledged all the alerts, so we are trying to roll that back. Is there any way to do that?
Hello, I need to remove data on the deployment server of my Splunk Cloud instance. How can I remove old data to free up space on the disk? Thanks
Hi there! I've been using Splunk for a while and now I want to use certificates to make it more secure. The problem is that, after following the documentation, Splunk Web doesn't start. My PEM certificate has two certificates inside plus a private key; I also tried putting the private key in a .key file with the two certificates together in the PEM, and that doesn't work either. Any advice or solution? Thank you!
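For comparison, a minimal web.conf sketch of the SSL settings Splunk Web expects; the paths are placeholders, and the key referenced by privKeyPath generally needs to be unencrypted:

# web.conf - sketch only, paths relative to $SPLUNK_HOME
[settings]
enableSplunkWebSSL = true
serverCert = etc/auth/mycerts/splunkweb_cert_chain.pem
privKeyPath = etc/auth/mycerts/splunkweb_private.key

When Splunk Web refuses to start, $SPLUNK_HOME/var/log/splunk/web_service.log usually shows whether it is the certificate chain or the key it is unhappy with.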
I have a table in which one of the columns has logs like the one below:

2022-08-21 23:00:00.877 Warning: PooledThread::run: N4xdmp29ForestCheckSchemaDBChangeTaskE::run: XDMP-XDQPNOSESSION: No XDQP session on host iuserb.nl.eu.abnamro.com, client=iuserb.nl.eu.abnamro.com, request=moreLocators, session=2026168605646879816, target=5301003730415457210

I want to extract the term "XDMP-XDQPNOSESSION" into a variable and then use it later. How can I do that using regex or any other option?
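A sketch using rex, where log_column is a placeholder for whatever the column is actually called:

| rex field=log_column "(?<error_code>XDMP-[A-Z]+)"

The extracted error_code field (here XDMP-XDQPNOSESSION) can then be used in later eval, stats or where clauses.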
Hi, I want to extract the unique user ID for the users that are successfully logging in to the KTB system.

[2/11/00 12:45:35:039 ISTT] 00000115 SystemOut O User Login to KTB Successful - Bhatur- NT-000-TTT - PT-P065-APT
[2/11/00  9:27:26:877 ISTT] 00001309 SystemOut O User Login to KTB Successful - Bhatur- AM1353P - STYLE P Harry

The output should be:
NT-000-TTT
AM1353P
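A possible rex sketch, assuming the "Successful - <name>- <id> - " layout is consistent across events:

| rex "Login to KTB Successful - \w+- (?<user_id>\S+) - "
| table user_id

Against the two sample lines this captures NT-000-TTT and AM1353P.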
Hello, I have a query that produces a table like this:

Quantity   Company
4          Company_A
63         Company_B
13         Company_C

The requirement is that I have to send each company their own data, with the attached CSV named after them, for example "report_for_Company_A.csv" and "report_for_Company_B.csv". Company A will receive their row of data with their attached file name. I tried to use:

| <my search that produces the table>
| eval myCustomFileName = "Report_for_" + Company
| outputcsv $myCustomFileName$.csv

The field myCustomFileName is correct, but the output file was named $myCustomFileName$.csv. How do I do it?
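outputcsv takes its filename literally and does not expand $...$ tokens, but the map command does substitute field values into its inner search string, so one workaround (a sketch; maxsearches and the field names are taken from the table above) is:

| <my search that produces the table>
| map maxsearches=20 search="| makeresults | eval Quantity=$Quantity$, Company=\"$Company$\" | table Quantity Company | outputcsv report_for_$Company$.csv"

This writes one CSV per company under $SPLUNK_HOME/var/run/splunk/csv on the search head; actually emailing each file to the right company would still need a separate alert or script per recipient, since sendemail only attaches the results of its own search.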
The universal forwarder had been running fine, but one day it suddenly stopped and no longer runs. Why is this happening? The execution environment is as follows: Windows 7 32-bit
Dear Community, hope everyone is fine!! I am trying to change the font size for the dashboard labels and panel titles. Can anyone suggest how to do this?
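In Simple XML one common approach is a hidden HTML panel carrying CSS overrides (the depends token is deliberately never set, so the row stays invisible). A sketch; the selectors are assumptions that vary between Splunk versions, so inspect the rendered page to confirm them:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* panel titles - selector is an assumption, verify in your version */
        .dashboard-panel h2.panel-title { font-size: 18px !important; }
        /* dashboard title - also an assumption */
        .dashboard-header h2 { font-size: 24px !important; }
      </style>
    </html>
  </panel>
</row>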
Hi everyone, I have been facing a weird question about our alerts. Basically, we have an alert that triggers when the log contains an error. The syntax looks like below:

index=[Index] _index_earliest=-15m earliest=-15m (host=[Hostname]) AND (level=ERR OR tag IN (error) OR ERR)

We have an alert action set up to send a message to Teams when it triggers. The weird thing is: the alert doesn't trigger, but the search still matches events when run manually. For example, in the past 24 hours 50 events were matched by the search, yet no alerts were triggered. When I went and searched the internal logs, I found the search dispatched successfully but it shows

result_count=0, alert_actions=""

It looks like the scheduled search never picked up the events to trigger an alert, but my manual search can find them. Has anyone had a similar problem before? Much appreciated.
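One common cause of this pattern (an assumption here, not something confirmed from these logs) is indexing lag: the events exist, but they arrive in the index after their event timestamps, so a scheduled search over the last 15 minutes of event time finds nothing at the moment it runs. A sketch of a variant that keys the window off index time while leaving event time wide, to test that theory:

index=[Index] _index_earliest=-15m _index_latest=now earliest=-24h (host=[Hostname]) AND (level=ERR OR tag IN (error) OR ERR)

If this version triggers reliably, the fix is usually to widen earliest (or delay the schedule) enough to cover the ingestion delay.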
It is sort of like multiplying the set with itself and getting a subset, in mathematical terms.

My data is something like this:

src_ip    dst_ip     time    X    Y
1.1.1.1   2.2.2.2    1pm     ..   ...
2.2.2.2   3.3.3.3    3pm     ..   ...
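The post is cut off, but if the goal is to pair rows where one row's dst_ip is another row's src_ip (as 2.2.2.2 suggests above), a self-join sketch could look like this; my_index is a placeholder:

index=my_index
| fields _time src_ip dst_ip
| join type=inner dst_ip
    [ search index=my_index
      | rename dst_ip as next_dst_ip, src_ip as dst_ip, _time as next_time
      | fields dst_ip next_dst_ip next_time ]
| table _time src_ip dst_ip next_dst_ip next_time

The join command has subsearch size and time limits, so for large data sets a stats-based self-join (renaming the IP fields onto a common name and aggregating with stats) tends to scale better.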
I have a group of 6 hosts logging to Splunk, but I am having trouble getting these specific log files in. An example of the path and file is:

/opt/TalendRemoteEngine/TalendJobServersFiles/jobexecutions/logs/20220817205900_iC1V4/resuming_20220817205900_iC1V4.log

Both the last directory name and the log filename are always going to be different each time a log is generated, so I'm trying to use wildcards such as /opt/TalendRemoteEngine/TalendJobServersFiles/jobexecutions/logs/*/*.log, but this is not working. My $SPLUNK_HOME/etc/deployment-apps/Splunk_TA_nix/local/inputs.conf file looks like this:

[monitor:///opt/TalendRemoteEngine/TalendJobServersFiles/jobexecutions/logs/.../*.log]
disabled = 0

Any suggestions as to why this does not work and what I should use or try? Many thanks!
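For comparison, a sketch of an alternative stanza that monitors the parent directory and whitelists the log files; the index line is a placeholder, and its absence from the original stanza is worth checking, since without it the events land in the default index rather than wherever you are searching:

[monitor:///opt/TalendRemoteEngine/TalendJobServersFiles/jobexecutions/logs]
disabled = 0
whitelist = \.log$
index = main

Beyond the stanza itself, it is worth confirming the deployment app actually reached the six hosts (splunk btool inputs list --debug on a forwarder shows what it is really using) and that the splunk user can read those directories.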
How do I compare the values of the most recent event to the event before that and show only the difference? In one example, I am looking at o365 management activity with multivalue fields. I want to see the difference and know when a domain has been added to an inbound Spam Policy. Here is my base search:

index=idm_o365 sourcetype=o365:management:activity Workload="Exchange" Operation="Set-HostedContentFilterPolicy"
| eval a=mvfind('Parameters{}.Name', "AllowedSenderDomains"), AllowedSenderDomains=mvindex('Parameters{}.Value', a)
| table _time user_email ObjectId AllowedSenderDomains
| sort - _time

The last two events will be this:

2022-08-15 00:00:00   user@example.com   SpamPolicyName   A.com;B.com;C.com
2022-08-10 00:00:00   user@example.com   SpamPolicyName   A.com;B.com

I would like to compare these two events and only show the difference, i.e. that "C.com" was added:

2022-08-15 00:00:00   user@example.com   SpamPolicyName   C.com
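A possible sketch using streamstats to carry the previous event's list forward and mvmap to keep only the domains that were not in it; mvmap needs Splunk 8.0 or later, and match() here treats each domain as a regex (the dots are tolerant), so this is a sketch to validate rather than a finished alert:

index=idm_o365 sourcetype=o365:management:activity Workload="Exchange" Operation="Set-HostedContentFilterPolicy"
| eval a=mvfind('Parameters{}.Name', "AllowedSenderDomains"), AllowedSenderDomains=mvindex('Parameters{}.Value', a)
| sort 0 _time
| streamstats current=f window=1 last(AllowedSenderDomains) as previous_domains by ObjectId
| eval current_domains=split(AllowedSenderDomains, ";")
| eval added=mvmap(current_domains, if(isnotnull(previous_domains) AND NOT match(previous_domains, current_domains), current_domains, null()))
| where isnotnull(added)
| table _time user_email ObjectId added
| sort - _time

Against the two sample events this returns the 2022-08-15 row with added=C.com; the very first event for a policy is skipped because it has nothing earlier to compare against.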