All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

This info actually matches the data from the CMC; the only issue I have is that you can't group the volume by index (although I can group by splunk_server/indexer).
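A minimal SPL sketch of grouping license usage volume by index, assuming the standard `license_usage.log` fields (`idx` for index, `b` for bytes) are populated in your environment:

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB
```

Note that per-index attribution from `type=Usage` events can be squashed on busy indexers, so totals by `idx` may not add up exactly to the pool total.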
Tried this already and got this error: Invalid value "$TimeRange.earliest$" for time term 'earliest'
Hi @smallwonder  Currently there is no option to limit data sent to Splunk after reaching a certain limit. You can try filtering the data, which I mentioned in an earlier post.
@Jamietriplet wrote:

index=Index name sourcetype=sourcetype name (field names) earliest=$TimeRange$ latest=now()

index=Index name sourcetype=sourcetype name (field names) earliest=$TimeRange.earliest$ latest=$TimeRange.latest$
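A sketch of how the time token is typically wired up in a Simple XML dashboard, which avoids the "Invalid value for time term 'earliest'" error by using the `.earliest`/`.latest` sub-tokens in the search's time tags rather than inline in the query. The index and sourcetype names here are placeholders:

```
<input type="time" token="TimeRange">
  <label>Time Range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>

<search>
  <query>index=myindex sourcetype=mysourcetype</query>
  <earliest>$TimeRange.earliest$</earliest>
  <latest>$TimeRange.latest$</latest>
</search>
```

The bare `$TimeRange$` token has no value of its own; only the `.earliest` and `.latest` sub-tokens are set by a time picker.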
I am new to Splunk but have spent a long time with Unifi kit. I am on the latest version of the Unifi controller with a config for SIEM integration with Splunk. I have installed Splunk on a Proxmox VM using Ubuntu 24.04.   Is there a step-by-step guide on how to ingest my syslog data from Unifi into Splunk please?  Regards,   BOOMEL
I had a similar problem; hopefully this helps. You may have to create a Splunk user:   adduser splunk
Your panel will have a search (data source) associated with it - how is that data source configured (with respect to timeframe)?
There is a search panel you are trying to pass the variables to, i.e. the panel that gives an error when you try to use the token values.
Standard data ingestion with the default setup, sending data via an HEC token; the data is getting ingested in a non-human-readable format. I tried creating a new token and sourcetype but still no luck. Please advise what else we should do differently to get the proper format. Sample events (all with sourcetype = aws:cloudwatchlogs:vpcflow, host = http-inputs-elosusbaws.splunkcloud.com, source = http:aws_vpc_use1_logging):

12/3/24 9:21:58.000 AM   P}\x00\x00\x8B\x00\x00\x00\x00\x00\x00\xFFE\x90\xDDn\x9B@\x84_eun\xF6\xA2v}\xF6\xD8;lo$W\xDEM\xD5

12/3/24 9:21:58.000 AM   \xB9\xB7\xE6\xA0sV\xBA\xA0\x85\xFF~H\xA4[\xB31D\xE7aI\xA8\xFDe\xD7˄~\xB5MM\xE6>\xDCAIh_\xF5ç\xE0\xCCa\x97f\xC9V\xE7XJ o]\xE2\xEE\xED{3N\xC0e\xBA\xD6y\K\xA3P\xC8&\x97\xB16\xDDg\x93Ħ\xA0䱌C\xC5\xE3\x80~\x82\xDD\xED\xAD\xD39%\xA1\xEDu\xCE\x9F35\xC7y\xF0IN\xD6냱\xF6?\xF8\xE3\xE0\xEC~\xB7\x9Cv\x9D\x92 \x91\xC2k\xF9\xFANO

12/3/24 9:21:58.000 AM   Y7'BaRsԈd\xBA\x88|\xC1i.\xFC\xD6dwG4\xA1<iᓕK\xF7ѹ* ]\xED\xB3̬-\xFC\xF4\xF7eb

(further similarly garbled binary events at the same timestamp omitted)
I have always preferred the rollover summary generated once daily.

index=_internal source=*license_usage.log* type=RolloverSummary

https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/WhatSplunklogsaboutitself
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Shareperformancedata
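A sketch of turning the daily rollover summary into a usage trend, assuming the standard `b` (bytes) field of `license_usage.log`:

```
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d max(GB) as daily_usage_GB
```

The RolloverSummary event is written once per day at license rollover, so one event per day is expected.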
Check which roles are inherited, like "user", which would carry up the ability to create a dashboard.  Please check which version you have; I believe in version 9.3.x you should look for this: [capability::edit_view_html] * Lets a user create, edit, or otherwise modify HTML-based views. https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/authorizeconf
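A hedged authorize.conf sketch of granting (or, by omitting the line, withholding) that capability on a custom role; the role name here is a placeholder:

```
# authorize.conf
[role_custom_analyst]
importRoles = user
edit_view_html = enabled
```

Remember that capabilities accumulate through importRoles, so a capability inherited from "user" cannot be removed in the child role; you would instead build the role on a more restricted parent.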
Hi @smallwonder  In addition to what @gcusello said, if you want to reduce the data ingested into Splunk, like removing some log events, you can also try ingest actions (similar to the null queue):  https://docs.splunk.com/Documentation/Splunk/latest/Data/DataIngest#Filter_with_regular_expression This can be done on heavy forwarders; it's UI based and easy to navigate. Also, in the case of monitoring new log files, you can add ignoreOlderThan to avoid ingesting files older than a specified time.
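A sketch of the ignoreOlderThan setting in a monitor stanza; the path and index name are placeholders:

```
# inputs.conf on the forwarder
[monitor:///var/log/myapp/*.log]
index = main
ignoreOlderThan = 7d
```

Files whose modification time is older than the threshold are skipped entirely, so use this only where genuinely stale files should never be read.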
The UF agent has certificate-based secure communication back to the HF or indexing tier.  The default certificates at install are the same across all installs, so they are not secure until you deploy your own certificates.  Beyond that, I do not know of any transmission checks, so you need to rely on the assumption that, with proper encryption, no one is touching the data in transit.
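A hedged outputs.conf sketch of replacing the default certificates on a universal forwarder; the server name and certificate paths are placeholders for your own PKI:

```
# outputs.conf on the universal forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true
```

With sslVerifyServerCert enabled, the forwarder refuses to send to an indexer that cannot present a certificate signed by your CA, which addresses the tampering-in-transit concern.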
Can I just specify the maximum amount of data I want to send over for that day? If it reaches, say, 1 GB of data per day, it would stop forwarding until the next day.
Hi @smallwonder , when you say limit the amount of data, do you mean limiting the files to read, or filtering events? If limiting the files to read, you can add whitelist and blacklist options to your inputs.conf. If instead you want to filter some data, you have to identify one or more regexes to filter your logs (positive or negative filtering), and then apply the method described at https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Routeandfilterdatad Remember that these filters must be applied in the first full Splunk instance they pass through, in other words on the first Heavy Forwarder present or on the Indexers, not on Universal Forwarders. Ciao. giuseppe
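A sketch of the null-queue filtering method from the docs link above; the sourcetype name and regex are placeholders for your own:

```
# props.conf (on the first heavy forwarder or on the indexers)
[my_sourcetype]
TRANSFORMS-filter_noise = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are routed to the null queue and never indexed, so they do not count against your license.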
Do you mean the control panel?
How do I limit the amount of data coming over from  [monitor://path/to/file] in my Splunk forwarder inputs.conf file?  I did see the whitelist and blacklist directives. Are there any other ways to limit the log files, for example to stop WinFIM from exceeding the data limit?
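A sketch of the whitelist/blacklist directives on a monitor stanza; the path and patterns are placeholders (both are regexes matched against the full file path):

```
# inputs.conf on the forwarder
[monitor://path/to/file]
whitelist = \.log$
blacklist = (debug|trace)
```

This limits which files are read, but not the volume within a file; volume reduction requires event filtering on a heavy forwarder or indexer, as described elsewhere in this thread.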
Reading through the Ideas, there are a few written in different ways that would yield the same result. This is the simplest explanation: https://ideas.splunk.com/ideas/PLECID-I-606. If we could use * as a literal, it would help your problem too. What would be best is the ability to use a regex statement. At my shop, it would be OK to do index=ABCDE*, but not index=A*.
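For context, a hedged authorize.conf sketch of the wildcard index restriction being discussed; the role and index prefix are placeholders:

```
# authorize.conf
[role_abcde_analyst]
importRoles = user
srchIndexesAllowed = ABCDE*
srchIndexesDefault = ABCDE*
```

The limitation raised in the Idea is that the * here is always a wildcard; there is currently no way to match a literal asterisk or to express the restriction as a full regex.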