All Posts



LINE_BREAKER must contain a capture group. Everything before capture group is considered "previous event", capture group is treated as event breaker _and is removed from your data_ and everything aft... See more...
LINE_BREAKER must contain a capture group. Everything before capture group is considered "previous event", capture group is treated as event breaker _and is removed from your data_ and everything after the capture group is part of the "next event". Also - you still didn't say what constitutes a new event in your example.
| eval row=mvrange(0,2)
| mvexpand row
| eval sent=if(row=0,AMOUNT,null())
| where isnull(sent) OR sent>=250
| eval received=if(row=1,AMOUNT,null())
| eval account=if(row=0,ACCOUNT_FROM,ACCOUNT_TO)
| eventstats sum(sent) as total_sent sum(received) as total_received count(received) as count by account
| fillnull value=0 total_sent total_received
| where total_sent > total_received AND count > 10
Hi Team,

04/06/2024;10:08:36;Control;Machine ON
04/06/2024;10:05:39;Others;Start sample (D) ST 2 795 x1000
04/06/2024;10:05:36;Others;Sampling end ST 1
04/06/2024;10:00:25;Others;Start sample (D) ST 1 781 x1000
04/06/2024;09:55:33;Operator;Operator level: 0 -> 6 UP23477

To break the events after that, I wrote a regex like

^\d{2}\/\d{2}\/\d{4};\d{2}:\d{2}:\d{2};Operator;Operator\slevel:\s0\s->\s+6\s+\w+

but it does not break the event. Please help me with the regex query.
Awesome.. Thanks @ITWhisperer  worked like a charm
What if I want to add the requirement that the amount received has to be above 250 and the number of received transactions has to be above 10? The original query is

index=myindex AMOUNT>=250
| eventstats sum(AMOUNT) as total_sent count as receive by ACCOUNT_FROM
| eval temp=ACCOUNT_FROM
| where receive > 10
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent INFO temp
| join type=inner temp
    [search index=myindex
    | stats sum(AMOUNT) as total_received by ACCOUNT_TO
    | eval temp=ACCOUNT_TO]
| where total_sent > total_receive
Here is the answer - use a POST to admin/SAML-groups and add the names of the external groups and the internal roles. The English in the documentation is "sub-par" and I will be asking for it to be ... See more...
Here is the answer - use a POST to admin/SAML-groups and add the names of the external groups and the internal roles. The English in the documentation is "sub-par" and I will be asking for it to be updated. The description of the API POST call for "admin/SAML-groups" says "Convert an external group to internal roles." What it should say is, "Creates a mapping between the external SAML group and the internal roles." This action does as my description says.
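As a rough sketch of what such a call can look like (the host, credentials, group name and role names below are placeholders - check the REST API reference for your version before relying on the exact parameter names):

```
# Hypothetical example: map the external SAML group "sec_team"
# to the internal roles "user" and "power" on the management port
curl -k -u admin:changeme https://localhost:8089/services/admin/SAML-groups \
  -d name=sec_team \
  -d roles=user \
  -d roles=power
```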
Hi @nhana_mulyana, are you sure that you are a registered Splunk Partner? You should see something like this:   Ask your Splunk Channel Manager. Ciao. Giuseppe
I am wondering why the Deployment Server disk is full when the only things stored on this server are the Deployment Server TAs and .conf files to distribute to the Universal Forwarders. These are the specs:

Deployment Server
- 16 CPU cores (or 32 vCPU - if VM then must be dedicated), 2 GHz+ per core or greater
- 16GB RAM
- 1 x 200GB storage space (for OS and Splunk)
- 64-bit OS Linux/Windows
- 10GB Ethernet NIC, with optional 2nd NIC for management network

But the disk space is full in /root.

Please help. Thank you
When I click the Manage button in Partner Company manage, I don't see the "Download letter of Authorization" button.
You need to get the values into the same event so you can do the calculation - try something like this sourcetype=log4j | rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\... See more...
You need to get the values into the same event so you can do the calculation - try something like this

sourcetype=log4j
| rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (.*?) correlation-id \((?<corrid>.+)\) and body"
| rex "com\.filler\.filler.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
| stats first(systime_batch) as systime_batch values(systime_mcd) as systime_mcd by corrid
| eval diff = (systime_mcd-systime_batch)
Hi @BRFZ, see the Alert Manager Enterprise app (https://splunkbase.splunk.com/app/6730). Ciao. Giuseppe
Hello, would it be possible to create a dashboard where we can receive alerts directly?
Hi Community, I need to calculate the difference between two timestamps printed in the log4j logs of a Java application from two different searches; the timestamp is printed after the "system time" keyword in the logs.

log for search-1
2024-07-18 06:11:23.438 INFO [ traceid=8d8f1bad8549e6ac6d1c864cbcb1f706 spanid=cdb1bb734ab9eedc ] com.filler.filler.filler.MessageLoggerVisitor [TLOG4-Thread-1-7] Jul 18,2024 06:11:23 GMT|91032|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.visitor.MessageLoggerVisitor|-|PRD01032 - Processor (Ingress Processor tlog-node4) processed message with system time 1721283083437 batch id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 correlation-id (f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001) and body (

log for search-2
2024-07-18 06:11:23.487 INFO [ traceid= spanid= ] com.filler.filler.filler.message.processor.RestPublisherProcessor [PRD-1] Jul 18,2024 06:11:23 GMT|91051|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.processor.RestPublisherProcessor|-|PRD01051 - Message with correlation-id f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001 successfully published at system time 1721283083487 to MCD

I am using the query below to calculate the time difference, but I end up with duplicates and a lot of null values. These null values only appear when I do the calculation; for the individual searches the null values don't pop up.

sourcetype=log4j
| rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (.*?) correlation-id \((?<corrid_batch>.+)\) and body"
| rex "com\.filler\.filler.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid_mcd>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
| dedup corrid_batch
| eval diff = (systime_mcd-systime_batch)
| where corrid_mcd=corrid_batch
| table diff

Kindly help.
If you want to dynamically deploy the app to either the manager-apps or apps directory, you can use serverclass.conf on the deployment server to do this. Note that this is quite a complex deployment structure; make sure you keep your serverclass.conf well documented.

Firstly, restore the CM's deploymentclient.conf repository settings to default so that the serverClass can control the target repository:

[deployment-client]
repositoryLocation = $SPLUNK_HOME/etc/apps
serverRepositoryLocationPolicy = acceptSplunkHome

Then on the DS you can dynamically set the repositoryLocation using the targetRepositoryLocation directive within serverclass.conf at the serverClass level. For example you could have something like this:

[serverClass:CM_Deploy_to_Apps]
whitelist.0 = cm.yourcompany.com
targetRepositoryLocation = $SPLUNK_HOME/etc/apps
stateOnClient = enabled

[serverClass:CM_Deploy_to_Apps:app:example_app_1]
restartSplunkd = true

[serverClass:CM_Deploy_to_Apps:app:example_app_2]
issueReload = true
restartIfNeeded = true

[serverClass:CM_Deploy_to_Manager_Apps]
whitelist.0 = cm.yourcompany.com
targetRepositoryLocation = $SPLUNK_HOME/etc/manager-apps
stateOnClient = noop

[serverClass:CM_Deploy_to_Manager_Apps:app:example_manager_app_1]

[serverClass:CM_Deploy_to_Manager_Apps:app:example_manager_app_2]
You are right - I misunderstood what you were trying to do - try this

| eval row=mvrange(0,2)
| mvexpand row
| eval sent=if(row=0,AMOUNT,null())
| eval received=if(row=1,AMOUNT,null())
| eval account=if(row=0,ACCOUNT_FROM,ACCOUNT_TO)
| eventstats sum(sent) as total_sent sum(received) as total_received by account
| fillnull value=0 total_sent total_received
| where total_sent > total_received
There is some confusion. We do not talk about  "Splunk Add-on for Cisco ESA".  I have asked for MIME Decoder Add-on for Cisco ESA  Compatibility This is compatibility for the latest version Spl... See more...
There is some confusion. We are not talking about the "Splunk Add-on for Cisco ESA". I asked about the MIME Decoder Add-on for Cisco ESA.

Compatibility
This is the compatibility for the latest version:
Splunk Enterprise Platform Version: 9.2, 9.1, 9.0, 8.2, 8.1, 8.0
I saw the following text in the documentation:

When ingesting metrics data, each metric event is measured by volume like event data. However, the per-event size measurement is capped at 150 bytes. Metric events that exceed 150 bytes are recorded as only 150 bytes. Metric events less than 150 bytes are recorded as event size in bytes plus 18 bytes, up to a maximum of 150 bytes. Metrics data draws from the same license quota as event data.

I'm wondering how Splunk handles multi-metric events with dimensions and tags. Here is an example:

{
Tag1: Cross-Direction (CD)
Type: CSV
Unit: LS77100
Groupe: Traverse
metric_name: LS77100.Traverse.Y1: 1.15
metric_name: LS77100.Traverse.Y2: 2.13
metric_name: LS77100.Traverse.Y3: 2.14
metric_name: LS77100.Traverse.Y4: 1.16
}

So what counts as a byte here? Do I have to pay for every character after "metric_name:"? And what about the tags above: do I pay for a tag like Tag1 or Unit four times in this example? In this example I just have four points; in reality there are around 3000 points. At the moment I'm sending the information as an event to Splunk. I'm thinking about ingesting it as metrics because I guess they perform better. Maybe another way is to send it as an event, split it, and use mcollect; I'm not sure what the best way is.
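For the mcollect route mentioned above, a sketch of what the conversion could look like (the index names and the field Y1 are assumptions, not from your environment):

```
index=myevents sourcetype=machine_csv
| eval metric_name="LS77100.Traverse.Y1", _value=Y1
| mcollect index=my_metrics Tag1 Type Unit Groupe
```

Here the fields listed after the metrics index become dimensions on the data point; you would repeat or extend this for the other series (Y2, Y3, ...).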
With load balancing, the Universal Forwarder sends data to all the indexers equally so that no single indexer gets all the data, and together the indexers hold all the data. It also provides automatic switchover capability in case an indexer goes down. Load balancing can be set up on the UF in the outputs.conf file in two ways:

By time
By volume

For time-based load balancing we use the autoLBFrequency setting, and for volume we use autoLBVolume.

Let's say I have three indexers to which I want to send data from the UF. My outputs.conf file will look like this:

[tcpout:my_indexers]
server = 10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997

Now, to send data for 3 minutes to an indexer, then switch to another indexer and then to another, set autoLBFrequency like this:

autoLBFrequency = 180

Based on the above setting, the UF will send data to indexer 10.10.10.1 for 3 minutes continuously, then it will move on to the other indexers, and this loop will continue.

To send data based on volume - say, to configure the UF to send 1MB of data to an indexer and then switch to another indexer in the list - the setting will look like this:

autoLBVolume = 1048576

In the case of a very large file, such as a chatty syslog file, or when loading a large amount of historical data, the forwarder may become "stuck" on one indexer, trying to reach EOF before being able to switch to another indexer. To mitigate this, you can use the forceTimebasedAutoLB setting on the forwarder. With this setting, the forwarder does not wait for a safe logical point and instead makes a hard switch to a different indexer every AutoLB cycle.

forceTimebasedAutoLB = true

To guard against loss of data when forwarding to an indexer, you can enable the indexer acknowledgment capability. With indexer acknowledgment, the forwarder resends any data that the indexer does not acknowledge as "received". The useACK setting is used for this purpose:

useACK = true

The final outputs.conf will look like this:

[tcpout]
useACK = true
autoLBFrequency = 180
autoLBVolume = 1048576

[tcpout:my_indexers]
server = 10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997
But that isn't right, is it? Your query produces total_sent for ACCOUNT_FROM and total_received for ACCOUNT_TO. Since ACCOUNT_FROM and ACCOUNT_TO are two different accounts, where total_sent > total_received does not make sense.
Hi @nhana_mulyana, go to the Partner Portal and log in with your account. If you are correctly associated with your company, you can click the Partner Company manage button. At the bottom of the new dashboard you can find the "Download letter of Authorization" button. Ciao. Giuseppe