All Topics



Hello, I would like to try using Splunk to calculate the difference in numbers from one sample to the next. Here are some theoretical log entries. The data indexed will look like this:

yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=100
yesterday_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=200
today_timeStamp clusterId=abc, topicName=mytopic, partition=0, lastOffset=355
today_timeStamp clusterId=abc, topicName=mytopic, partition=1, lastOffset=401

The number of events in the last 24 hours would be: partition 0 (355-100=255), partition 1 (401-200=201). The sum of partitions for topic mytopic = 255+201=456. There will be many topicName(s) and possibly different numbers of partition(s) per topicName. Can I use Splunk to calculate the number of events per partition and topic since yesterday?
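A sketch of one approach (the index and sourcetype names here are assumptions, not from the post): take the earliest and latest lastOffset per partition over the window and difference them, then roll the per-partition deltas up to the topic:

```
index=kafka sourcetype=broker_offsets earliest=-1d@d
| stats earliest(lastOffset) as first_offset latest(lastOffset) as last_offset
        by clusterId topicName partition
| eval events_per_partition = last_offset - first_offset
| stats sum(events_per_partition) as events_per_topic by clusterId topicName
```

Because the stats is split by clusterId, topicName, and partition, it handles any number of topics and any number of partitions per topic without listing them.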
Hi, I have a dashboard that works out the percentage of builds that complete inside SLO.   I would like to be able to compare this month's results to those of last month and show a trend line as to how much this percentage has improved (or not).  I appreciate that 'count' and timechart are normally used to show the trendline on a single value visualisation.  Is there a solution that allows a trendline from a single value that is achieved via eval? Many thanks
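One pattern that works (the index, sourcetype, and field names below are assumptions): compute the percentage per time bucket with eval after timechart, so the Single Value visualization still receives a _time series to draw its trendline from:

```
index=builds sourcetype=ci_results
| timechart span=1d count(eval(within_slo="true")) as good count as total
| eval pct_in_slo = round(100 * good / total, 1)
| fields _time pct_in_slo
```

Pointing the Single Value visualization at this search gives it a per-day series of the eval-derived percentage, which is what its trend indicator and sparkline are computed from.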
Hi all, I have an authorize.conf located in an application, which is usually deployed via the Deployer to SH members. There is also an authorize.conf in our system/local directory (created by the GUI). Currently these two files are a bit interlinked. Example: a role is created at app level (deployed via the Deployer), but some inheritance/reference was added via the GUI, so that entry gets logged in system/local. Is there any way to identify whether the role is present in system/local or in the app?
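btool can show which file each setting comes from (the role name below is a placeholder):

```
splunk btool authorize list --debug | grep -i myrole
```

With --debug, every output line is prefixed with the path of the .conf file that supplied it, so entries coming from system/local and entries coming from the deployed app are immediately distinguishable.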
Hello, I have events that include a username field (and of course _time). I would like to count how many users were added each month, but there are months when no new users were created. I can find the first appearance of each user using:

stats min(_time) by username

Then I can use timechart to count new users by month and streamstats to get the cumulative sum. I have found how to fill the gaps for months with no new users by using the makecontinuous command. What I haven't figured out yet is how to fill the period before the first user creation, and the period from the last user creation until today.

... | timechart span=1mon count(username) as users | makecontinuous span=1mon _time | fillnull | streamstats sum(users) as cum

Thanks for the help
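One way to pin both edges (the index and sourcetype are assumptions): append synthetic rows at the search boundaries via addinfo before charting, so timechart's buckets span the full picker range and makecontinuous only has interior gaps to fill:

```
index=auth sourcetype=accounts
| stats min(_time) as _time by username
| appendpipe
    [ stats count
      | addinfo
      | eval _time=mvappend(info_min_time, info_max_time)
      | mvexpand _time
      | fields _time ]
| timechart span=1mon dc(username) as new_users
| makecontinuous span=1mon _time
| fillnull value=0 new_users
| streamstats sum(new_users) as cumulative_users
```

The appended rows carry no username, so they contribute 0 to dc(username) while still stretching the chart from the start of the search window to its end.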
I'd like to report an incomplete transform of RegistryValueData in Splunk_TA_microsoft_sysmon v1.0.1. Currently it looks like:

[sysmon-registryvaluedata]
REGEX = <Data Name='Details'>\w+\s\((.+)\)</Data>
FORMAT = RegistryValueData::$1

So it works fine when Details contains: DWORD (0x00000001). But when Details is a string value, nothing is extracted. What about this transform?

[sysmon-registryvaluedata]
REGEX = <Data Name='Details'>(?:([^()]*)|\w+\s\((.+)\))</Data>
FORMAT = RegistryValueData::$1$2
Hi, I have to extract the fields below from these three logs to create a visualisation. Fields I am interested in: event_log type, originator_username, object, username, destination, bucket_name, time, type.

I have written this regex for the parser, but I am not getting all the fields when writing the base search:

^(?:[^ \n]* ){2}(?P<event_log>\w+\s+[a-z_-]+)(?:[^ \n]* ){2}\{"originator_username"\:(?P<originator_username>.[a-z]+")\,"object"\:(?<object>.[a-z]+)[^,\n]*,"extra"\:\{(?P<extra>.[a-z]+)":[^,\n]*(?:[^,\n]*,){6}"time"\:(?P<time>\w+),(?:[^,\n]*,){2}"type"\:(?<type>.[a-z_]+[a-z])"}

2022-01-23 10:19:47,140 WARNING event_log EventLog: {"originator_username":"abc","object":"cluster","extra":{"username":"admin"},"object_type":"cluster","originator_uid":0,"time":164287087,"throttled_event_count":1,"obj_uid":null,"type":"failed_authentication_attempt"}
2022-01-23 07:24:05,479 INFO event_log EventLog: {"originator_username":"abcef","object":"bdb:1","extra":{"destination":{"bucket_name":"dbabucket","type":"s3","subdir":"radar2","filename":""}},"object_type":"bdb","originator_uid":0,"time":164767765,"throttled_event_count":1,"obj_uid":"1","type":"backup_succeeded"}
2022-01-23 07:15:00,294 INFO event_log EventLog: {"originator_username":"adminstrator","object":"bdb:1","object_type":"bdb","originator_uid":0,"time":1642788100,"throttled_event_count":1,"obj_uid":"1","type":"backup_started"}

Can anyone help me with what needs to be fixed in the regex so I can get all the needed fields extracted for the base search?
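Since everything after "EventLog:" is valid JSON, a more robust sketch (index and sourcetype are assumptions) is to isolate the JSON blob with rex and let spath extract every field, rather than matching each key positionally:

```
index=redis sourcetype=event_log
| rex "EventLog:\s+(?<json>\{.+\})$"
| spath input=json
| rename extra.username as username, extra.destination.bucket_name as bucket_name
| table _time type originator_username object username bucket_name time
```

spath extracts nested keys with dotted names (extra.username, extra.destination.bucket_name), so the rename just flattens them to the labels wanted for the visualisation; fields that are absent from a given event (e.g. bucket_name in the authentication event) simply come out empty.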
Hello, I need to limit the number of values shown in a multivalue column in a dashboard. This was possible in advanced XML using this option:

<module name="SimpleResultsTable">
<param name="allowTransformedFieldSelect">True</param>
<param name="count">10</param>

But I fail to see how to do it in Simple XML now that advanced XML is deprecated. Is there a way to do this using a classic dashboard on newer versions of Splunk that no longer support advanced XML?
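Simple XML tables have no equivalent option, but the search feeding the panel can cap the multivalue field itself with mvindex (the field name below is a placeholder):

```
... | eval my_mv_field = mvindex(my_mv_field, 0, 9)
```

mvindex with the range 0–9 keeps at most the first ten values of the multivalue field, so the table column never shows more than ten regardless of how many the event carries.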
Hello, I have the following query to get data grouped by month and by software version, using a "where" condition:

index=tst | spath path="Check{}" output=Num | where isnotnull(Num) | timechart dc(run.ID) span=1mon by version | addtotals

I'm wondering how to display the data grouped by month and by version with the condition "where isnotnull(Num)", as a ratio of that number of events to the total. I tried it this way:

| dedup run.ID | eventstats count(eval(isnotnull(Num))) as cnt, dc(run.ID) as total by version | eval p=(cnt/total)*100 | timechart values(p) span=1mon by version
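A sketch that keeps the per-month split (assuming run.ID uniquely identifies a run): bucket time first, compute both the total and the with-Num distinct counts per month and version, then pivot back into a timechart-style table:

```
index=tst
| spath path="Check{}" output=Num
| bin _time span=1mon
| stats dc(run.ID) as total, dc(eval(if(isnotnull(Num), 'run.ID', null()))) as with_num
        by _time version
| eval pct = round(100 * with_num / total, 1)
| xyseries _time version pct
```

The eval inside dc() passes the run.ID through only when Num exists, so with_num/total is the fraction of runs per month and version that satisfied the condition; xyseries reshapes the result into one column per version, like timechart would.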
We are ingesting logs from an Imperva SQS queue in our AWS environment. We want to use a custom sourcetype for these logs, i.e. "imperva:incapsula" instead of the default sourcetype "aws:s3:accesslogs", on the Splunk Add-on for AWS.

We have made changes via the backend in inputs.conf and restarted the service. These changes are reflected in the UI, and we can see the events below in the internal logs stating the input has been tagged with the new sourcetype. But the logs are still being indexed under the old sourcetype, aws:s3:accesslogs. We have tried multiple things, like creating a new custom input with the new sourcetype and creating a props.conf for the new sourcetype under the system/local directory, but it didn't help; the logs are still being indexed under the default sourcetype "aws:s3:accesslogs".

Internal logs after making the changes:

"2022-01-28 09:33:58,959 level=INFO pid=10133 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:run:635 | datainput="imperva-waf-log" start_time=1643362438 | message="Data input started." aws_account="SplunkProdCrossAccountUser" aws_iam_role="aee_splunk_prd" disabled="0" host="ip-172-27-201-15.ec2.internal" index="corp_imperva" interval="300" python.version="python3" s3_file_decoder="S3AccessLogs" sourcetype="imperva:incapsula" sqs_batch_size="10" sqs_queue_region="**-1" sqs_queue_url="https://***/aee-splunk-prd-imperva-waf" using_dlq="1""

props.conf

[imperva:incapsula]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)CEF:\d\|
NO_BINARY_CHECK=true
TIME_FORMAT=%s%3N
TIME_PREFIX=start=
MAX_TIMESTAMP_LOOKAHEAD=128

inputs.conf

[aws_sqs_based_s3://imperva-waf-log]
aws_account = SplunkProdCrossAccountUser
aws_iam_role = aee_splunk_prd
index = corp_imperva
interval = 300
s3_file_decoder = S3AccessLogs
#sourcetype = aws:s3:accesslogs
sourcetype = imperva:incapsula
sqs_batch_size = 10
sqs_queue_region = ***-1
sqs_queue_url = https://**/aee-splunk-prd-imperva-waf
using_dlq = 1
disabled = 0

Has anyone faced a similar issue?
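One behaviour of the Splunk Add-on for AWS worth checking: with s3_file_decoder = S3AccessLogs, the decoder itself can determine the sourcetype, overriding the sourcetype setting in the stanza. Since these appear to be CEF events rather than genuine S3 access logs, a sketch of the relevant stanza changes (the rest of the stanza stays as in the post):

```
[aws_sqs_based_s3://imperva-waf-log]
s3_file_decoder = CustomLogs
sourcetype = imperva:incapsula
```

CustomLogs is the decoder the add-on provides for arbitrary log formats where the user supplies the sourcetype; whether it applies here depends on the add-on version in use, so treat this as something to verify against the add-on's documentation.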
Can we populate logs from a primary index into a summary index? How is this done?
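A minimal sketch using the collect command (index names and fields below are placeholders): search the primary index, optionally aggregate, and write the results into the summary index:

```
index=primary_index sourcetype=app_logs
| stats count by host, status
| collect index=my_summary
```

For recurring population, the usual pattern is to save this as a scheduled search with summary indexing enabled (or use the si- commands such as sistats), so each run appends its results to the summary index; the summary index must already exist and be writable by the search's owner.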
Hello Splunkers, I have installed the Splunk Universal Forwarder (ARMv6) on a Raspberry Pi (running Raspberry Pi OS 32-bit). I enabled the splunkd instance for user splunk (not an admin user) with the following commands:

1. sudo chown -R splunk: /opt/splunkforwarder
2. sudo /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunk -group splunk
3. sudo su splunk
4. sudo systemctl start SplunkForwarder

Unfortunately I get the following error: Job for SplunkForwarder.service failed because the control process exited with error code. See "systemctl status SplunkForwarder.service" and "journalctl -xe" for details.

SplunkForwarder.service:

#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=1963114496
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

Does anyone have an idea how to enable the systemd service on the Raspberry Pi so that Splunk starts automatically on reboot?
Hello guys, I need help with a dropdown. Basically I have a dropdown, based on a previous one, that returns some hostname choices. Based on this hostname I need to show some panels, but I would like nothing to appear in the dashboard until the user chooses an option from the dropdown. However, my token assumes the "$label$" value, so the depends attribute always shows what I'm trying to hide. Below is the code:

<input depends="$switch_type$" type="dropdown" searchWhenChanged="true" token="device_name">
  <label>Switch Name</label>
  <fieldForLabel>switch</fieldForLabel>
  <fieldForValue>hostname</fieldForValue>
  <search>
    <query>"my query "</query>
  </search>
  <change>
    <set token="switch">$label$</set>
  </change>

I would like the token $switch$ to be false or unset until the user selects something from the dropdown, so I can use it in a depends condition... The values the token can assume are not fixed, so I cannot use a custom condition. Thanks for the help
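A sketch of one approach: give the dropdown no default, set the token only inside the change handler, and hang the panels' depends on that token; it then does not exist on first load, so the panels stay hidden until a real selection is made (the panel content here is a placeholder):

```xml
<input depends="$switch_type$" type="dropdown" searchWhenChanged="true" token="device_name">
  <label>Switch Name</label>
  <fieldForLabel>switch</fieldForLabel>
  <fieldForValue>hostname</fieldForValue>
  <search>
    <query>"my query "</query>
  </search>
  <change>
    <set token="switch">$label$</set>
  </change>
</input>
<panel depends="$switch$">
  <!-- panel content goes here -->
</panel>
```

If the parent dropdown ($switch_type$) can change, adding an <unset token="switch"></unset> inside that parent input's own <change> block clears the state, re-hiding the panels until a new hostname is picked.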
Hi Community, Splunk newbie here... I am trying to set up a demo of Aruba/HPE ClearPass to Splunk integration. I have configured ClearPass to send syslog (udp/514) to Splunk for audit records on ClearPass. I have also installed the ClearPass app in Splunk, set up a data input, and can see syslog events hitting the Splunk server when using Wireshark. I have also set up a new index 'aruba' and can see that it is being populated frequently; however, I do not see any events in the Splunk dashboards for the ClearPass app. Any idea what could be causing this? Splunk is installed on a Windows 2019 server in my home lab that is also my lab AD domain controller (I only have one server license). Thanks
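A quick sanity check (the index name is taken from the post): confirm which sourcetype the events actually landed under, since app dashboards typically search a specific sourcetype and/or index via macros or eventtypes:

```
index=aruba | stats count by sourcetype, source
```

If the sourcetype shown here differs from what the ClearPass app's dashboards expect, the data input's sourcetype setting (or the app's macros/eventtypes, which may also need pointing at the custom 'aruba' index) is the place to reconcile.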
Hi all, I installed the Check Point App for Splunk and found some strange behaviour. First, the name is "Check Point App for Splunk" but the folder name is "TA-check-point-app-for-splunk", which is odd: is it an app or a TA? But that isn't my problem: after installing this app I found that, for each event, some fields (date, time and rule_action) are duplicated with the same value. In other words, each event carries the same field and value twice (e.g. rule_action="allowed"). Has anyone encountered this problem? Ciao and thanks, Giuseppe
This Question got deleted.
I'm using Splunk Enterprise 8.2.4 with a deployment server. I want to push out all config/apps to my forwarders to prevent server admins adding config/apps locally. To date, system admins have been creating their own inputs and dumping data into main, flooding the license usage etc., and I need to stop this happening. I only want approved configs/inputs to be pushed to the forwarders. As such, I have onboarded all my forwarders to the deployment server. My first question is:

Q1: How do I prevent a user on the system from creating an input and pushing data to the indexers? Is there a config item to only accept inputs deployed by the deployment server?

On a test system I pushed an application I created that disabled the collection of [WinEventLog://Security]. I found, though, that the system had received the app but was still pushing those events. Running btool at the forwarder shows:

C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf [WinEventLog://Security]
C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf disabled = 0

So this seems to be the config from when the forwarder was installed and the Windows inputs were selected in the forwarder MSI installation UI.

Q2: How do I override this with the deployment server, i.e. a locally configured input not necessarily in the apps folder?
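For Q2, one commonly used sketch: deploy an app whose local/inputs.conf explicitly disables the stanza. Between app directories at the same config level, precedence follows ASCII sort order of the app directory names, so the deployed app needs a name that wins against SplunkUniversalForwarder (the app name below is an assumption; verify the ordering against the configuration file precedence documentation for your version):

```
# etc/apps/A00_disable_windows_inputs/local/inputs.conf
[WinEventLog://Security]
disabled = 1
```

After the next deployment-client phone-home and forwarder restart, btool list --debug on the forwarder should show disabled = 1 sourced from the deployed app rather than disabled = 0 from the MSI-seeded file.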
Below is the query I am trying to use to get the result, but it's giving an error for the eval statement. Could anyone please help me here?

index="application_name" | spath logger | search logger=" logging.transcation.filter "| spath event | search event = "responseActivity"| search requestURI IN (/login,/api/v1/user/profile,/api/v1/app/version,/api/v1/user/profile/pickey,/api/v1/home/reseller/*) | eval requestURI=case((requestURI="/api/v1/home/reseller/*"), "/api/v1/homepage")
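One thing to note: eval's = comparison is an exact string match and does not expand wildcards, and a case() with no default returns null for every non-matching URI. A sketch of the same mapping with like(), where % is the SQL-style wildcard:

```
| eval requestURI = if(like(requestURI, "/api/v1/home/reseller/%"), "/api/v1/homepage", requestURI)
```

match() with a regex would work equally well; the key point is that the third argument to if() passes all other requestURI values through unchanged instead of nulling them out.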
Hi, I need to mask some time picker items. I have succeeded in masking "Real-time" ("Temps réel") with the code below, but I have not succeeded in masking the items marked with a yellow cross. Could you help please?

/* ---------------------------------------- hide the Real-time ("Temps réel") column */
div[data-test="real-time-column"] { display: none; }
div[data-test="other-column"] { display: none; }
div[data-test-panel-id="realTime"] { display: none; }
Hi, I use a search which is quite verbose because it queries system events based on different tokens. By default all these tokens are set to "*":

index=toto sourcetype="system" site="$Site$" type=$type$ name="$name$"

I would like to know if there is a solution to reduce the disk quota usage and the number of events without playing with the timepicker or with the tokens. Thanks
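Two low-impact sketches that shrink what a search writes to its dispatch directory without touching the time range or the tokens (the field list and the cap are illustrative):

```
index=toto sourcetype="system" site="$Site$" type=$type$ name="$name$"
| fields _time site type name
| head 10000
```

fields drops everything else from the result set before it is persisted, and head bounds the number of events returned; if the panels only need aggregates, ending the search with a stats command instead of listing raw events reduces the footprint further still.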
Hello, how can I ingest only the log lines starting with a specific word?

Sample log entry:

SPLUNKD-123456: Hello World
Hello World123
Hello World456
Hello World789
SPLUNKD-0000: Hello World
SPLUNKD-0012: Hello World
Hello World0123
Hello World0456

Logs that should be ingested into Splunk:

SPLUNKD-123456: Hello World
SPLUNKD-0000: Hello World
SPLUNKD-0012: Hello World

Thanks!
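A sketch of the standard "keep only matching events" pattern, applied on the indexer or heavy forwarder at parse time (the sourcetype name is a placeholder): the two transforms run in order, first routing everything to nullQueue and then routing matching events back to the index queue:

```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = setnull, setparsing

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^SPLUNKD-\d+:
DEST_KEY = queue
FORMAT = indexQueue
```

With the sample data above, only the three SPLUNKD-prefixed lines survive to indexing; note this assumes each line is its own event (SHOULD_LINEMERGE = false or a suitable LINE_BREAKER), since the transforms operate per event, not per raw line.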