All Posts


Hi @fraserphillips  Out of interest, did you make any upgrades or changes around March? In terms of extracting the fields, if you aren't having any joy with the wizard then, provided you know the values, you can add these by hand in either props.conf/transforms.conf files or in the Fields page of the Splunk UI, where you can create field extractions/aliases/transforms etc.: https://yourSplunkinstance/en-US/manager/search/fields
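For reference, a hand-written search-time extraction in props.conf might look something like the sketch below - the sourcetype name, field names and regex are purely illustrative assumptions, not taken from your data:

# props.conf on the search head - illustrative sourcetype and field names
[checkpoint:harmony]
EXTRACT-src_and_action = src=(?<src_ip>\S+)\s+action=(?<action>\S+)

After saving (or creating the equivalent entry in the Fields UI), new searches against that sourcetype should show the src_ip and action fields without needing the wizard.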
It's the value that you would expect to be a GUID, isn't it? I believe the name of the HEC token can be anything. As you suggested, if you're editing directly in inputs.conf you can set any token value - this is at least still working in 9.4.1 anyway.
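For illustration, a HEC token defined directly in inputs.conf usually looks something like this - the stanza name, token value, index and sourcetype are all placeholders:

# inputs.conf on the HEC receiver - all values below are placeholders
[http://my_hec_input]
disabled = 0
token = 6f1a2b3c-4d5e-6789-abcd-0123456789ef
index = main
sourcetype = my:custom:sourcetype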
Hi @stemerdink  Just to check - when you say it hasn't worked, is this that it excludes all EventCode 4662 or allows all? I would expect the following to work in this scenario:
blacklist1 = EventCode="4662" Message="(?s)^(?!.*(?:\{?1131f6ad\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?|\{?1131f6aa\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?|\{?9923a32a\-3607\-11d2\-b9be\-0000f87a36b2\}?|\{?1131f6ac\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?)).*"
This should exclude EventCode 4662 *unless* one of the GUIDs matches.
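For context, the blacklist1 line belongs in the Windows event log stanza of inputs.conf on the forwarder collecting those events - something like the sketch below, assuming the standard Security channel input (regex abbreviated here; use the full one above):

# inputs.conf on the Windows forwarder - regex abbreviated for readability
[WinEventLog://Security]
disabled = 0
blacklist1 = EventCode="4662" Message="(?s)^(?!.*(?:...full GUID list from above...)).*"

A restart of the forwarder is typically needed before the new blacklist takes effect on incoming events.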
Hi, have you looked at this, and especially the logs pointed to here: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting  BTW, your Splunk version is already out of support and you should update it. You didn't mention which DBX version you have - based on the JDBC version I suppose it isn't the newest one but some pre-3.10 version? If you are setting this up from scratch I strongly suggest you take the newest version that your OS + Splunk + Java combination can run! There was a radical change in 3.10+ in how JDBC drivers are packaged with DBX. Even though your environment is configured to use Windows domain authentication, I'm expecting that you can still create a local DB user on your MS SQL server and use it. E.g. with a Linux HF this is the way it must be done in most cases. r. Ismo
Hi @cmutt78_2  Can you open https://yoursplunkinstance/en-US/manager/search/data/inputs/TA-Akamai_SIEM ? You should see an empty table with a green "Add" button at the top right.
The other thing you could try is running:
/opt/splunk/bin/splunk cmd splunkd print-modinput-config TA-Akamai_SIEM TA-Akamai_SIEM
This triggers the same process as when the input is loaded by Splunk - check for any errors in the output here; you should end up with something that looks a bit like this:
<?xml version="1.0" encoding="UTF-8"?>
<input>
  <server_host>macdev</server_host>
  <server_uri>https://127.0.0.1:8089</server_uri>
  <session_key>sVNwheYXxxx0QNqfj_xePWwhxVbraZc6pS4FNyHQzVe2KRgv7s6tjKrZg660zYhotfG0_W62rm0UA01XkVqBX4dNUls5pA7dWyjXMRUltbsjtsA</session_key>
  <checkpoint_dir>/opt/splunk/var/lib/splunk/modinputs/TA-Akamai_SIEM</checkpoint_dir>
  <configuration/>
</input>
As @PickleRick said, I suppose the best option is just to set up at least two separate HFs to manage the actual HEC inputs, then add an LB in front of them. You should still use those two apps: one to enable the HEC interface and another for the actual HEC token, plus maybe props and transforms .conf if you need to manipulate those events. In the long run it could be even easier to manage each token in its own app, but this totally depends on your needs and what kind of environment you have (e.g. several dev, test, stage, UAT and prod environments with several integrations going on at the same time). Anyhow, don't use indexers as HEC receivers, as there are too many times when those need rolling restarts while you are managing the tokens! And you can generate a valid token with the uuidgen command on any Linux node; there are also some web pages for this.
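As an illustrative sketch of that two-app split (app names, token value and index are made up):

# app 1, e.g. hec_base/local/inputs.conf - enables the HEC endpoint itself
[http]
disabled = 0
port = 8088

# app 2, e.g. hec_token_myapp/local/inputs.conf - one token per integration
[http://myapp_token]
disabled = 0
token = 3f2504e0-4f89-11d3-9a0c-0305e82c3301
index = myapp_index

The token value can be generated with uuidgen on the command line and pasted in.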
Another reason could be that your events contain timestamps that are very far apart from each other. This also leads to buckets being closed before they are full. There should be some indication of the reason in the _internal logs, or even in CMC -> Indexing -> Data quality.
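If it helps, a starting point for digging into _internal might be something like the search below - the component name and the caller field are my assumptions about where and how hot-to-warm rolls are logged, so adjust if your version logs them differently:

index=_internal sourcetype=splunkd component=HotBucketRoller
| stats count by idx, caller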
Our Checkpoint Harmony logs aren't reviewed too often; today I went to look for something and noticed nothing is parsed. Going back in the logs, it appears that sometime in March the stream of data coming in drastically changed. There might be more data coming from the Checkpoint Harmony server compared to previously. I'm trying to create custom field extractions on this data but it keeps crashing the wizard. Just curious if anyone has any suggestions? Thanks!
All DB Connect logs are stored in the _internal index. You can find them e.g. by using source=*splunk_app_db - see more at https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting
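For example, something along these lines (the trailing wildcard is my addition so the pattern matches the full log file names - adjust if your paths differ):

index=_internal source=*splunk_app_db*
| stats count by source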
Actually ITSI and IT Essentials Work are the same product (only one download package). The only difference is that ITSI needs an official license to enable the additional features. You could say that ITEW is just a sales tool for ITSI.
Is there a query to identify underused fields? We are optimizing the size of our large indexes. We identified duplicates and noisy logs, but next we want to possibly find fields that aren't commonly used and get rid of them (or if you have any additional advice on cleaning out a large index). Is there a query for this?
I think that the official answer is that trellis supports only 20 instances. There is at least one post which could help you (I haven't tested it). You can try this: https://community.splunk.com/t5/Dashboards-Visualizations/Is-there-a-way-to-display-more-than-20-charts-at-a-time-using/m-p/298549/highlight/true#M18953 - and please report back if it works as you need!
Are you using HEC or the UF's S2S over HTTP? Your token name is a little bit weird to use as a normal HEC token. Officially the format should be a GUID, but I know that at least with earlier versions other formats have also worked.
Others have already commented on this, so just some additions and clarifications. In Splunk you should think of one sourcetype as one lexical format of event. So if events have a different number of fields, a different field order, differently formatted timestamps, or timestamps in different places, you should have separate sourcetypes for them. As @livehybrid shows, you can extract and use different timestamp formats and evaluate them correctly with INGEST_EVAL. There are a couple of examples in the community, and some .conf presentations have additional examples. The easiest way to test this is just to ingest the events into your test environment/test indexes and then use SPL and eval in one line to check how you can get the correct format. You could see e.g. https://community.splunk.com/t5/Getting-Data-In/Best-way-to-extract-time-from-file-name-and-text/m-p/677542 and https://conf.splunk.com/files/2020/slides/PLA1154C.pdf - those contain some examples. Also be sure whether you need to use := instead of =. r. Ismo
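A minimal sketch of the INGEST_EVAL approach mentioned above - the sourcetype name and timestamp format are invented for illustration:

# props.conf
[my:custom:sourcetype]
TRANSFORMS-set_time = set_time_from_prefix

# transforms.conf - note := rather than = when overwriting an existing field such as _time
[set_time_from_prefix]
INGEST_EVAL = _time:=strptime(substr(_raw, 1, 19), "%Y-%m-%d %H:%M:%S")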
Wait a second. Technicalities aside, it seems you're trying to do exactly the opposite of what you say you want to do.
I'm not 100% sure what you want to do and you're being quite vague about it. As @livehybrid already said, there are some ways to override the default timestamp recognition, but I'll add that it's needlessly complicated, might be difficult to maintain, and adds extra load on the indexers since the timestamp has to be parsed out of the event twice. While dynamic routing to another index is a pretty common thing, recasting one general sourcetype into "subsourcetypes" which are parsed into slightly different fields at search time is also not unusual. But splitting a single sourcetype/source/host stream into completely differently treated events is typically an indication that someone didn't bother to properly classify and split the data upstream (like reading the whole /var/log/messages, or getting syslog from the whole environment as the "syslog" sourcetype).
@cmutt78_2  Were you able to see the data input after restarting the Splunk services, or is it still missing? My Akamai data input: Where did you install the Akamai add-on - on the Heavy Forwarder (HF)? If it's on the HF, does it have a valid license? Some features require a license and aren't available with the Free license. For a heavy forwarder (HF), you should set up one of the following options:
1) Make the HF a slave of a license master. This gives the HF all of the enterprise capabilities, and the HF will consume no license as long as it does not index data.
2) Install the forwarder license. This gives the HF many enterprise capabilities, but not all. The HF will be able to parse and forward data; however, it will not be permitted to index and it will not be able to act as a deployment server (as an example). This is the option I would usually choose.
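If you go with option 1, the pointer to the license master typically lives in server.conf on the HF - a sketch, assuming the older attribute naming (newer releases use manager_uri, and the same setting can be made in the UI under Settings -> Licensing):

# server.conf on the heavy forwarder - hostname is a placeholder
[license]
master_uri = https://your-license-master:8089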
It is not clear what your issue is - if you specify earliest and latest using the format you have used, they appear to be passed to a macro (that begins with "index=..."), and if you don't specify an overriding time, the time specified by the search also seems to be used. Please provide more precise detail as to what your macro actually is (obfuscating as minimally as possible), how you have used it in the search, and how you have set up the alert.
@cmutt78_2  Could you please check the splunkd.log file? It may contain information explaining why the data input from the add-on isn't appearing.  
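For example, something like this should surface any errors logged by the add-on - the ERROR filter and the search term are my assumptions, so widen them if nothing shows up:

index=_internal sourcetype=splunkd log_level=ERROR TA-Akamai_SIEM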
yep, I am thinking it is an app issue