All Topics


So I've got (unfortunately multi-line) JSON files being sent from a host to our indexers via a Universal Forwarder. By setting "sourcetype = _json" in the Universal Forwarder's inputs.conf stanza, messages are making it to the indexers just fine. However, I'm trying to rename the sourcetype (as well as do what should be some simple extractions) based on the incoming logs' source, on the indexer side. This isn't working at all, and I'm trying to figure out why. Based on the source visible in Splunk when looking at the events, these should be matching.

inputs.conf:

[monitor:///opt/cloud-custodian-container/.../resources.json]
sourcetype = _json
crcSalt = <SOURCE>
index = cloud_custodian

props.conf (on indexers):

[source::.../cloud-custodian-container/.../resources.json]
TRANSFORMS-cc_change_sourcetype = cc_change_sourcetype
TRANSFORMS-cc_indexed_fields = cc_indexed_fields

transforms.conf (on indexers):

[cc_change_sourcetype]
REGEX = \/cloud-custodian-container\/\d{12}\/[\w\-]+\/
FORMAT = sourcetype::cloud_custodian
SOURCE_KEY = MetaData:Source
DEST_KEY = MetaData:Sourcetype

[cc_indexed_fields]
REGEX = \/cloud-custodian-container\/(\d{12})\/([\w\-]+)\/
FORMAT = aws_account_id::$1 region::$2
SOURCE_KEY = MetaData:Source
WRITE_META = true
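As a quick sanity check that the transform's REGEX actually matches the indexed source values, the same pattern can be applied at search time with rex (a sketch — the index name below just matches the inputs.conf above):

```
index=cloud_custodian sourcetype=_json
| rex field=source "\/cloud-custodian-container\/(?<aws_account_id>\d{12})\/(?<region>[\w\-]+)\/"
| stats count by source, aws_account_id, region
```

If this search extracts the fields but the index-time transform still doesn't fire, the problem is likely in where the props/transforms are deployed rather than in the regex itself.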
/opt/splunk/etc/deployment-apps/indexer_config/local/indexes.conf:

[volume:indexer_disk_size]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 530000

[network]
homePath = volume:indexer_disk_size/network/db
coldPath = volume:indexer_disk_size/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
maxDataSize = auto_high_volume
maxHotIdleSecs = 86400
maxWarmDBCount = 7
frozenTimePeriodInSecs = 7776000
# 1 month = 2592000, 3 months = 7776000

I am seeing that each of my indexes can individually grow to a maximum of 500 GB, which I believe is the default value. I am using Splunk 7.3.x. Do I need to change any of my configuration settings? I suspect the volume referencing $SPLUNK_DB as the path is causing issues, although I have had difficulty confirming it. The many indexes in this volume are collectively consuming 720 GB despite the attempted 530,000 MB volume cap (the 530 value was chosen to differentiate it from the baseline 500 GB). Please help me understand what I have done wrong. The configurations are all done on the deployment server in indexes.conf, and I am unable to see the configurations or the deployment-server-defined indexes/volumes on any of my Splunk monitoring consoles. Thanks.
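One detail worth noting: each index also has its own per-index cap, maxTotalDataSizeMB, which defaults to 500000 MB (about 500 GB) and is enforced independently of any volume-level maxVolumeDataSizeMB. A sketch of capping an individual index explicitly, in addition to the volume cap (the value below is just an example):

```
[network]
homePath   = volume:indexer_disk_size/network/db
coldPath   = volume:indexer_disk_size/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
# per-index cap; default is 500000 MB if unset
maxTotalDataSizeMB = 100000
```

The 500 GB per-index behavior described above is consistent with this default applying because maxTotalDataSizeMB was never set.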
To get logs from either a Windows or a Linux path, is there a way other than using a Universal Forwarder, or is that the only way? I don't know how silly the question is, but it's better to make sure.
Really stumped on this. We would like to count the number of instances of each process run on a server, and present the sum of RAM and CPU usage for all those instances of the process. Here is an example, from the server side, of all the instances of a single app being run, that we would like to aggregate:

  PID   USERNAME NLWP PRI NICE SIZE RES  STATE TIME   CPU   COMMAND
  22661 cacheuse 1    59  0    452M 441M sleep 0:03   0.00% cache
  22664 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22669 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22667 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22665 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22670 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22668 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22666 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22953 cacheuse 1    59  0    452M 444M sleep 0:03   0.00% cache
  23053 cacheuse 1    59  0    452M 444M sleep 0:06   0.00% cache
  23052 cacheuse 1    59  0    452M 444M sleep 0:02   0.00% cache
  24543 cacheuse 1    59  0    452M 444M sleep 0:03   0.00% cache
  22941 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22945 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22944 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22943 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22946 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22942 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22947 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22940 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22948 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22939 cacheuse 1    59  0    452M 440M sleep 0:06   0.00% cache
  22938 cacheuse 1    59  0    452M 440M sleep 0:00   0.00% cache
  22671 cacheuse 1    59  0    451M 440M sleep 0:00   0.00% cache
  22663 cacheuse 1    59  0    451M 440M sleep 0:13   0.00% cache
  22662 cacheuse 1    59  0    451M 440M sleep 0:00   0.00% cache
  22649 cacheuse 1    59  0    444M 440M sleep 0:33   0.00% cache
  22932 cacheuse 1    59  0    443M 440M sleep 0:05   0.00% cache
  5863  root     17   59  0    185M 163M sleep 139:37 0.00% sstored
  4570  splunk   43   59  0    177M 130M sleep 15:16  0.00% splunkd

As you can see, there are 28 instances of the cache program. We would like to roll all of that up into something like this:

  Program   # instances   total RAM   total CPU
  cache     28            12GB        0.00%
  splunkd   1             177M        0.01%
  sstored   1             185M        0.01%

For the top sourcetype, the VIRT field counts RAM in kilobytes. If VIRT's integer value is greater than 1024, we want the integer divided by 1024 and suffixed with the letter "M" for megabytes; and if the integer is greater than 1048576, we want the integer divided by 1048576 and suffixed with the letter "G" for gigabytes. Here is what we've come up with so far, but it's nowhere near what we need:

earliest=-1h index=xxxx sourcetype=top host=xxxx COMMAND!="<n/a>"
| rename COMMAND as Program, pctCPU as "% CPU", USER as User
| regex "% CPU"="(\d+)"
| convert rmunit(VIRT)
| eval inMB=if(VIRT>=1024,1,0), VIRT=floor(if(inMB=1,VIRT/1024,VIRT*1))
| chart sum(VIRT) by Program

Thank you in advance!
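The roll-up described above can be sketched in one stats pass (assuming the top sourcetype extracts COMMAND, VIRT, and pctCPU, and that rmunit leaves VIRT as a plain number of kilobytes — the unit-suffix handling is the part to verify against the actual data):

```
index=xxxx sourcetype=top host=xxxx COMMAND!="<n/a>"
| convert rmunit(VIRT) as VIRT_KB
| stats count as instances, sum(VIRT_KB) as totalKB, sum(pctCPU) as "total CPU" by COMMAND
| eval "total RAM"=case(totalKB>1048576, round(totalKB/1048576,1)."G",
                        totalKB>1024,    round(totalKB/1024,0)."M",
                        true(),          totalKB."K")
| table COMMAND, instances, "total RAM", "total CPU"
```

The key difference from the query in the post is aggregating first (stats by COMMAND), then converting units on the summed value, rather than converting per event.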
When an administrator asks me what the requirements for the Universal Forwarder are, I consult the documentation and find this:

Recommended: dual-core 1.5GHz+ processor, 1GB+ RAM
Minimum: 1.0GHz processor, 512MB RAM, 5GB of free disk space

The server administrator considers these requirements to be very high. Is there any more detailed information on the agent's consumption, at both the machine and disk level?
The generic webhook alert action does not support basic authentication. Are there any alternatives that support basic authentication and are also compatible with Splunk Cloud?
We are planning to move to SmartStore for cold storage, and we have an on-prem multisite indexer cluster. We have two S3 object stores created with our partners. Can I point a different S3 object store at each site? (I know the suggested solution is for a single indexer cluster to have a single S3 object store.) What would be the best way to migrate from on-prem to SmartStore? Our configuration is around 10 indexers, each with 10 TB of data, at a daily volume of 300 GB/day. Thanks in advance!
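For reference, the basic SmartStore wiring in indexes.conf looks roughly like this (a sketch — bucket name and endpoint are placeholders, and credentials/site considerations are omitted):

```
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.example.com

[network]
remotePath = volume:remote_store/network
homePath   = $SPLUNK_DB/network/db
coldPath   = $SPLUNK_DB/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
```

Note that SmartStore replaces the hot/warm-to-cold rolling model: once remotePath is set for an index, warm buckets are uploaded to the remote store, which is worth factoring into any "cold storage only" migration plan.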
Hi everyone, I'm looking for a working package that can move data from a Splunk cluster environment to an S3 bucket for archiving. None of the examples I'm finding work.
I want to display counts by week, but with the current week's count in green, last week's count in orange, and counts older than two weeks in red.
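One way to sketch this is to bin by week and derive an age-based range field that a dashboard can color on (index and span are placeholders):

```
index=my_index
| bin _time span=1w
| stats count by _time
| eval age_weeks=floor((now()-_time)/604800)
| eval range=case(age_weeks=0,"green", age_weeks=1,"orange", true(),"red")
```

In a Simple XML table, a color format block mapping the range values green/orange/red can then render the rows accordingly; 604800 is the number of seconds in a week.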
Hi! I have a local setup with Splunk Enterprise and a single Universal Forwarder monitoring an arbitrary Documents folder. The forwarder is set up to send entire files to Splunk with these inputs.conf settings:

[batch://C:\Users\Currentuser\Documents\TestSplunk]
disabled = 0
sourcetype = BugReport
move_policy = sinkhole
index = sandbox

When I place a text file into this TestSplunk directory, it does disappear, showing that the forwarder picked it up and disposed of the file per the move_policy. However, from Splunk Enterprise, I can't see any evidence of the file being received. In the splunkd.log belonging to the forwarder, I don't see any message regarding the file being detected/sent/deleted. How would I be able to see information about this kind of thing?
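A hedged starting point for tracing this: assuming the forwarder is connected, its own splunkd.log is forwarded into the _internal index, so a search along these lines (forwarder hostname is a placeholder) can surface batch-input and output activity without reading the log file on disk:

```
index=_internal host=my-forwarder source=*splunkd.log*
    (component=BatchReader OR component=TcpOutputProc)
| table _time, component, log_level, _raw
```

If nothing at all shows up for that host in _internal, the forwarder-to-indexer connection itself is the first thing to verify.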
When I run "index=X sourcetype=Y cribl_pipe=Z" over 24 hours or 1 week, the index and sourcetype fields show at 100%. When I run the same search over 2 weeks or 1 month, the index and sourcetype fields do not show at 100%. I'm searching a single index and a single sourcetype, so why is the field coverage 100% at 1 week but not at 2 weeks? What could the issue be? And how can I identify raw events that are not indexed (source tcp:9997)?
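A sketch for isolating when a given field stops being present over the longer range (field and search terms taken from the question; this only distinguishes events missing the field, not events that were never indexed):

```
index=X sourcetype=Y earliest=-1mon
| eval has_pipe=if(isnull(cribl_pipe),"missing","present")
| timechart span=1d count by has_pipe
```

Bear in mind the fields sidebar percentages are computed from a sample of events, so a sub-100% reading over long ranges may reflect sampling rather than actual missing values.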
Hi, I have a few fields in a lookup from which I am trying to extract strings. I read that rex is what I should be using. Can anyone recommend how I should go about this? I have attached the lookup field and the result of the rex command that I want. Thanks, Rohan K.
Hello, I have calculated my Total Escalations per Quarter using stats count, and I would like to include another field that calculates the percentage increase/decrease of Total Escalations per Quarter. My query:

| inputlookup Case_Database_v2.csv
| rename "Case Number" AS Case_Number
| search Role=Support Squad=*CMS*
| lookup EAS_Escalations_v1.csv "Case Number" AS Case_Number OUTPUTNEW "Case Number"
| rename "Case Number" AS EAS_Case_Number
| eventstats dc(Case_Number) as CaseCount by Quarter
| stats count(EAS_Case_Number) as Total_Escalations by Quarter, CaseCount
| eval EAS_Escalation(Percentage)=round(Distinct_Escalations/CaseCount*100,2)

And this shows:

  Quarter   CaseCount   Total Escalations
  Qtr 1     799         315
  Qtr 2     889         368
  Qtr 3     798         287
  Qtr 4     777         220

I would like to calculate the percentage reduction per quarter; for example, Qtr 4's total of 220 escalations is a 23.3% reduction on the 287 escalations from Qtr 3. Any help is much appreciated!
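The quarter-over-quarter change can be sketched with streamstats, appended after the existing stats (assuming Quarter sorts chronologically as Qtr 1 through Qtr 4):

```
| sort 0 Quarter
| streamstats current=f window=1 last(Total_Escalations) as prev_escalations
| eval pct_change=round((Total_Escalations-prev_escalations)/prev_escalations*100,1)
```

With the sample numbers above, Qtr 4 would come out as round((220-287)/287*100,1) = -23.3, i.e. a 23.3% reduction; the first quarter has no previous row, so pct_change stays null there.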
After extracting fields for a sourcetype and spending a lot of time renaming them, I noticed I missed one. I can go to Settings > Fields > Field extractions and find my saved extraction by name. But is there any way to edit it?
Recently we changed the data logging process at the source, and it changed the event format of the SiteMinder log source feeding into the same sourcetype; new-format data is now incoming, while the old data remains in the previous format. How can we manage search-time extractions so they work for both data formats under the same sourcetype? The new extractions we use are completely different from the old ones. Any suggestions on how to accomplish this would be appreciated.
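One hedged approach: search-time EXTRACT- rules that fail to match an event simply produce no fields, so independent rules for each format can coexist under one sourcetype in props.conf. A sketch (stanza name and regexes are placeholders):

```
[siteminder]
EXTRACT-old_format = ^(?<user>\S+)\s+(?<action>\S+)\s+(?<resource>\S+)
EXTRACT-new_format = ^\{"user":"(?<user>[^"]+)","action":"(?<action>[^"]+)"
```

Each event then gets fields from whichever rule matches it, and dashboards can stay format-agnostic as long as both rules emit the same field names.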
Search | table ERROR_CD, HAWB, UREF, LRN, MRN, ER1_ER9_Details

Splunk alert subject: "$results.ERROR_CD$ $results.HAWB$"

I am using the above value for the email subject, but it comes back empty. Any advice on how I can use field values as email tokens?
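For comparison, the alert token syntax for field values is $result.FIELDNAME$ — singular "result" — and it resolves against the first row of the search results:

```
$result.ERROR_CD$ $result.HAWB$
```

The plural $results.…$ form in the subject above is not a recognized token, which would explain the empty substitution.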
Hello everyone, here is my search:

my_severity=error my_app="name" earliest=-48h latest=-24h
| stats count as nb_yesterday by my_method limit=0
| appendcols [search my_severity=error my_app="name" earliest=-24h latest=now
    | stats count as nb_today by my_method]
| eval increase=round(nb_today*100/nb_yesterday)
| eval status=if(increase>100 OR nb_today>10, "CRITICAL", "GOOD")
| table my_method, nb_yesterday, increase, status, nb_today
| sort nb_today desc

my_severity, my_app, and my_method are fields that I created myself. My search returns multiple results (and multiple lines), and I want to send one mail with the list of CRITICAL statuses, like:

"Hello, we noticed some errors:
[name of the method(1)] [status] [increase] [nb_today]
[name of the method(2)] [status] [increase] [nb_today]
[name of the method(3)] [status] [increase] [nb_today]
..."

Thanks.
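A sketch of one way to do this: filter to the CRITICAL rows and inline the remaining result table in the email body. The sendemail command is shown here as the ad-hoc equivalent of a scheduled alert's email action (recipient address is a placeholder):

```
... | where status="CRITICAL"
| table my_method, status, increase, nb_today
| sendemail to="ops@example.com" subject="CRITICAL methods" sendresults=true inline=true
```

With a saved alert instead, the same effect comes from enabling "Include results inline" in the email alert action after appending the where/table filtering to the search.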
I'm trying to create a chart showing activity from May through until now, knowing that the activity ceased some months ago. I want the chart to continue showing a flat line of zero from the time the activity stopped, rather than just stopping back in August. How would I tweak the following query to include the ceased traffic?

earliest=05/01/2020:00:00:01 latest=now index=nix sourcetype="nix" src_user=JohnD host=server1
| bin _time span=1w
| stats count by _time, host

Thanks.
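A sketch of one way to pad the empty weeks with zeros: timechart creates a bucket for every span across the whole search range, and fillnull turns any empty by-host columns into 0:

```
earliest=05/01/2020:00:00:01 latest=now index=nix sourcetype="nix" src_user=JohnD host=server1
| timechart span=1w count by host
| fillnull value=0
```

This differs from the bin/stats form above in that stats emits rows only for weeks that actually contain events, whereas timechart keeps the timeline continuous to the latest boundary.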
In Pearson VUE, when creating an account, there is a field called 'splunk-id'. I don't know where to get it. Thank you.