All Posts


Hello all, I have a log file with the following content in JSON format. I would like to parse the timestamp, convert it to "%m-%d-%Y %H:%M:%S.%3N", and assign it back to the same timestamp field. Can someone advise on what props.conf and transforms.conf should look like? I tried the _json sourcetype, but it produces "none" for the timestamp field. Note: I'm trying to test this locally.
```
{"level":"warn","service":"resource-sweeper","timestamp":1744302465965,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302475969,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302858869,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304731808,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304774636,"message":"1 nodes are not allocated"}
```
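For reference, a minimal props.conf sketch for getting _time from the epoch-millisecond value (the sourcetype name json_epoch is a placeholder, and I'm assuming Splunk's %s%3N directive handles the 13-digit epoch; rewriting the stored timestamp field itself would need something extra, e.g. an INGEST_EVAL transform):
```
# props.conf - sketch only; [json_epoch] is a hypothetical sourcetype name
[json_epoch]
KV_MODE = json
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```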
As the title says, I'm having trouble establishing a connection with my OpenShift namespace. Whenever I enter the details and hit Save and Test, an error pops up:
Setup Failed: An exception was thrown while dispatching the python script handler.
I've been searching the python logs and it seems to be related to OpenSSL:
```
grep -B 5 -A 5 "mltk" .../var/log/splunk/python.log
-> ERROR You are linking against OpenSSL 1.0.2, which is no longer supported by the OpenSSL project. To use this version of cryptography you need to upgrade to a newer version of OpenSSL. For this version only you can also set the environment variable CRYPTOGRAPHY_ALLOW_OPENSSL_102 to allow OpenSSL 1.0.2.
```
As the error suggests, I tried to set the variable via the command line, as well as through /splunk/etc/splunk-launch.conf, but without success. Has anyone had this error before and knows how to solve it?
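For reference, this is the kind of line I added to splunk-launch.conf, restarting Splunk afterwards (the variable name is taken straight from the error message; whether Splunk actually propagates it to the MLTK python process is exactly what I'm unsure about):
```
# /opt/splunk/etc/splunk-launch.conf (path may differ per install)
CRYPTOGRAPHY_ALLOW_OPENSSL_102=1
```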
@isoutamo @PickleRick what could be the consequences for HEC data if the indexers get rolling restarts every time? Loss of data? Loss of tokens? Please explain.
We are currently changing the sourcetype of incoming data as follows:
```
[A_syslog]
TRANSFORMS-<class_A> = <TRANSFORMS_STANZA_NAME>

[<TRANSFORMS_STANZA_NAME>]
REGEX = \w+\s+\d+\s+\d([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::B_syslog
WRITE_META = true
```
I want to apply a different timestamp to B_syslog, so I'm looking for that sourcetype in props.conf, but I can't see it. When the sourcetype is changed in the same way as above, can I get a different timestamp value only for that data?
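For illustration, per-sourcetype timestamp settings would look roughly like the sketch below (the TIME_* values are placeholders, not taken from the question). One caveat worth verifying: the sourcetype rewrite above happens in the typing pipeline, after timestamp extraction, so B_syslog's TIME_* settings may never be applied to these events.
```
# props.conf - illustrative sketch only; TIME_PREFIX/TIME_FORMAT are placeholders
[B_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
```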
Hi Isoutamo, Thank you for the quick response. I'll take a look at the link provided for troubleshooting. In the meantime, to answer your questions: we are using DB Connect 3.18.2, which is the newest version. We do have the compatible, updated JRE installed (JDK 17). I've confirmed Java is working by running the java version command on the server; it comes back with the installed version, so the environment variable is confirmed working and correct. Also, the DB Connect page would not work correctly if Java were not installed, and since I am able to navigate to the page we can probably rule out Java as the issue. Will post a solution/update once I go through that troubleshooting page. Kind regards,
Hello Livehybrid, Thank you for the quick response. We did indeed try Windows authentication. We have confirmed the permissions and password are correct, as we were able to log in with the account; access rights are also correct, as we could navigate the necessary areas of the database. We also tried SQL authentication - we even created a dedicated SQL account for that attempt (non-Windows auth). No luck there either. I failed to mention the DB Connect version in my original post: we have 3.18.2 installed. We also have all the necessary JDBC drivers installed. If there are any other data points I can provide to help get this resolved, please let me know. Kind regards,
Some of the Security Essentials AWS rules require you to populate the aws_service_accounts lookup for use in exceptions, but I'm having trouble working out how to map all my AWS service accounts. For example, see the implementation section of https://research.splunk.com/deprecated/4d46e8bd-4072-48e4-92db-0325889ef894/
Hi @fraserphillips Out of interest, did you make any upgrades or changes around March? In terms of extracting the fields, if you aren't having any joy with the wizard, then if you know the values you can add these by hand, either in props/transforms.conf files or on the Fields page of the Splunk UI, where you can create field extractions/aliases/transforms etc.: https://yourSplunkinstance/en-US/manager/search/fields
It's the value that you would expect to be a GUID, isn't it? I believe the name of the HEC token can be anything. As you suggested, if you're editing inputs.conf directly you can set any token value - this at least still works in 9.4.1.
Hi @stemerdink Just to check - when you say it hasn't worked, does it exclude all EventCode 4662 events or allow them all? I would expect the following to work in this scenario:
```
blacklist1 = EventCode="4662" Message="(?s)^(?!.*(?:\{?1131f6ad\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?|\{?1131f6aa\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?|\{?9923a32a\-3607\-11d2\-b9be\-0000f87a36b2\}?|\{?1131f6ac\-9c07\-11d1\-f79f\-00c04fc2dcd2\}?)).*"
```
This should exclude EventCode 4662 *unless* one of the GUIDs matches.
Hi, have you looked at this, especially the logs pointed out here: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting BTW, your Splunk version is already out of support and you should update it. You didn't mention which DBX version you have - based on the JDBC version I suppose it isn't the newest one but some pre-3.10 version? If you are setting this up from scratch, I strongly suggest you take the newest version that your OS + Splunk + Java combination can run! There was a radical change in 3.10+ in how JDBC drivers are packaged with DBX. Even though your environment is configured to use Windows domain authentication, I expect you can still create a local DB user on your MS SQL server and use it. E.g. with a Linux HF this is how it must be done in most cases. r. Ismo
Hi @cmutt78_2 https://yoursplunkinstance/en-US/manager/search/data/inputs/TA-Akamai_SIEM ? You should see an empty table with a green "Add" button at the top right. The other thing you could try is running:
```
/opt/splunk/bin/splunk cmd splunkd print-modinput-config TA-Akamai_SIEM TA-Akamai_SIEM
```
This triggers the same process as when the input is loaded by Splunk - check for any errors in the output. You should end up with something that looks a bit like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<input>
  <server_host>macdev</server_host>
  <server_uri>https://127.0.0.1:8089</server_uri>
  <session_key>sVNwheYXxxx0QNqfj_xePWwhxVbraZc6pS4FNyHQzVe2KRgv7s6tjKrZg660zYhotfG0_W62rm0UA01XkVqBX4dNUls5pA7dWyjXMRUltbsjtsA</session_key>
  <checkpoint_dir>/opt/splunk/var/lib/splunk/modinputs/TA-Akamai_SIEM</checkpoint_dir>
  <configuration/>
</input>
```
As @PickleRick said, I suppose the best option is to just set up at least two separate HFs to manage the actual HEC inputs, then add an LB in front of them. You should still use two apps: one to enable the HEC interface and another for the actual HEC token, plus props and transforms conf if you need to manipulate those events. In the long run it could be even easier to manage each token in its own app, but this depends entirely on your needs and what kind of environment you have (e.g. several dev, test, stage, UAT and prod environments with several integrations going on at the same time). Anyhow, don't use indexers as HEC receivers, as they need rolling restarts every time you manage those tokens! You can generate a valid token with the uuidgen command on any Linux node; there are also some web pages for this.
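For illustration, a minimal sketch of the token app side (the stanza name, token value and index are all placeholders - generate your own token value with uuidgen):
```
# inputs.conf inside the token app (all names here are placeholders;
# the token value below is example uuidgen output)
[http://my_integration_token]
token = 0f1e2d3c-4b5a-6978-8a9b-0c1d2e3f4a5b
index = my_index
disabled = 0
```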
Another reason could be that your events contain timestamps very far away from each other. This also causes buckets to close before they are full. There should be some indication of the reason in the _internal logs, or even in CMC -> Indexing -> Data quality.
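If you want to dig into it, something like this might surface the roll reasons (a sketch; I'm assuming the HotBucketRoller component logs idx and caller fields in your version):
```
index=_internal sourcetype=splunkd component=HotBucketRoller
| stats count by idx, caller
```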
Our Checkpoint Harmony logs aren't reviewed too often. Today I went to look for something and noticed nothing is parsed. Going back in the logs, it appears that sometime in March the stream of incoming data drastically changed - there might be more data coming from the Checkpoint Harmony server compared to previously. I'm trying to create custom field extractions on this data, but it keeps crashing the wizard. Just curious if anyone has any suggestions? Thanks!
All DB Connect logs are stored in the _internal index. You can find them e.g. by using source=*splunk_app_db. See more at https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Troubleshooting
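For example (the trailing wildcard is my assumption, to catch the different DBX log files):
```
index=_internal source=*splunk_app_db*
```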
Actually, ITSI and IT Essentials Work are the same product (only one download package). The only difference is that ITSI needs an official license to enable the additional features. You could say that ITEW is just a sales tool for ITSI.
Is there a query to identify underused fields? We are optimizing the size of our large indexes. We identified duplicates and noisy logs, but next we want to find fields that aren't commonly used and get rid of them (or, if you have any additional advice on cleaning out a large index, that's welcome too). Is there a query for this? One idea I'm considering is sketched below.
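A rough sketch of that idea - mining _audit for fields referenced in completed searches (the rex is naive and will catch false positives, so treat the output as a starting point only):
```
index=_audit action=search info=completed search=*
| rex field=search max_match=0 "(?<used_field>[A-Za-z0-9_]+)\s*="
| mvexpand used_field
| stats count by used_field
| sort + count
```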
I think the official answer is that trellis supports only 20 instances. There is at least one post which could help you (I haven't tested it): https://community.splunk.com/t5/Dashboards-Visualizations/Is-there-a-way-to-display-more-than-20-charts-at-a-time-using/m-p/298549/highlight/true#M18953 Please report back if it works as you need!
Are you using HEC or the UF's S2S over HTTP? Your token name is a little bit weird to use as a normal HEC token. Officially the format should be like a GUID, but I know that at least with earlier versions other formats have also worked.