One more comment: don't use .../etc/system/local for (almost) anything! Create your own app and use it to store your conf files. That way everything works much better in the long run.
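For example, instead of dropping settings into $SPLUNK_HOME/etc/system/local/, a layout along these lines keeps them in one deployable place (the app name org_all_config is purely illustrative):
$SPLUNK_HOME/etc/apps/org_all_config/
    default/app.conf
    local/props.conf        <- your settings live here, not in etc/system/local/props.conf
    local/transforms.conf
    metadata/local.meta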
| eval description=case(like(Status,"%host is down%"),Date.",".Server.",".Status.",".Threshold,like(Status,"%database is down%"),Date.",".Db.",".Status.",".'Instance status')
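To see how the like() wildcards behave, here is a minimal runnable sketch that fabricates a few Status values with makeresults (the sample texts are made up; only the "%host is down%" / "%database is down%" patterns matter):
| makeresults count=3
| streamstats count AS n
| eval Status=case(n=1, "ALERT: the host is down since 04:32", n=2, "WARN: the database is down for maintenance", true(), "INFO: all services are running")
| eval description=case(like(Status,"%host is down%"), "host problem", like(Status,"%database is down%"), "database problem", true(), "no match")
| table Status description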
Currently the minimum licence for Cloud is 5GB and it's always for a minimum of one year. If you want to test with Cloud you must use Splunk's free trial, which is 14 days, and you cannot migrate/convert this to an official stack after the 14-day period. On-prem the minimum license is 1GB/day. You could use it for a single node, or even create a multisite cluster with it, or anything between those options. With your ingested data amount there is no option to go with an SVC-based license on-prem, even though Splunk offers one; it needs a much higher daily ingestion amount to be reasonably priced versus an ingestion-based license. Currently SCP (Splunk Cloud licenses) are quite nicely priced versus on-prem plus the hardware/virtual capacity/management needed for it, so I would definitely look at SCP for production, probably even earlier than 5GB daily base ingestion. Of course it depends on what kind of environment you have, how Splunk will be managed there, etc. And of course you quite probably need some nodes (e.g. DS + IHF + DBX HF etc.) on-prem too?
Hi, regex101.com is your friend when you need to get started with regex. Here is your example: https://regex101.com/r/Tu8JB5/1 In Splunk you have a couple of ways to get this done. Use rex as @livehybrid shows, and create that rex e.g. with regex101.com. Use Splunk with rex:
| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` The above prepares sample data ```
| rex field=_raw "Service ID - (?<serviceID>.*$)"
| table serviceID
Or use Splunk with erex:
| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` The above prepares sample data ```
| erex serviceID examples="zywstrf,abc123f"
| table serviceID
Or use Splunk's "Extract new field" feature under "Interesting fields", then select regex and follow the instructions. There are two more places in the GUI where you can find this same functionality. Please accept the solution for the answer which helps you solve this issue; that way other people will also know what to do when they are looking for an answer to the same issue.
Hi @danielbb From memory, I think the minimum Cloud stack I've seen is 50GB, however this is just based on some of the smaller customers I have worked with. It is possible to get ingestion-based licenses for on-premises, which I believe start at 1GB and go up in 1GB chunks, so you would be able to purchase a 2GB license. I think your best option here is to speak to sales; if you don't have a contact, go via https://www.splunk.com/en_us/talk-to-sales/pricing.html?expertCode=sales and tell them you wish to do a PoC etc. They should be able to provide you a proper license for the PoC versus the trial license available online. Happy Splunking! Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
We are creating a small cluster with minimal ingestion of around 2 GB a day on-prem. I wonder what would be the best way to approach the license: is the per-usage (vs. ingestion) license available for an on-prem environment? For something so small, does it make sense to switch and have it in the cloud?
Yes, this was added to the official documentation recently. But currently there are still pieces, e.g. AppInspect, which don't accept it in Cloud vetting. From what I have heard this will be fixed soon, so then you could use it officially with Cloud too.
Of course you can add dedup to those queries, but this will kill your performance! And it depends on how this duplication has happened and how you can identify those events. Can you e.g. dedup on _raw, or just on a set of fields, or does it need some calculations/modifications (e.g. to times) too?
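For illustration, a minimal runnable sketch of both variants, using makeresults to fabricate duplicate events (with real data the duplicates come from your index, and the field list is only an example):
| makeresults count=4
| streamstats count AS n
| eval _raw="sample event ".if(n<=2,"A","B")
``` the 4 fabricated events form two pairs with identical _raw ```
| dedup _raw
``` with real data you might instead dedup a set of fields, e.g. | dedup host source uri_path ```
| table _raw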
Unfortunately, at least I don't know any generic answer for this. The method presented here is one option, but as said you need to be 100% sure that it works with your data, and test it several times to be sure! And of course you must first get rid of those new duplicates and ensure that all your inputs work as they should, without duplicating new events. After that you could probably do that delete, if you are absolutely sure that it also works in your case. And I propose that you use a temporary account which has the can_delete role, just for the time needed to do that cleanup.
Hi @Harikiranjammul If you can, restrict the first part of the search by adding (Status="*host is down*" OR Status="*Some other criteria*") Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
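Put together with the eval/case shown earlier in the thread, a hedged sketch of how the restricted search might look (index, sourcetype and the fields Date, Server, Db, Threshold, 'Instance status' are placeholders assumed to exist in your data):
index=your_index sourcetype=your_sourcetype (Status="*host is down*" OR Status="*database is down*")
| eval description=case(like(Status,"%host is down%"), Date.",".Server.",".Status.",".Threshold, like(Status,"%database is down%"), Date.",".Db.",".Status.",".'Instance status')
| table _time description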
I don't like the idea that you cannot add dedup with pipelines in the simple base search of a dataset. Splunk should offer a ready-made method to deduplicate an index.
IMHO: I don't like it, and don't suggest giving delete permissions to anyone permanently! It isn't a great idea to run scheduled jobs which remove events from Splunk.
There are several tools for doing server monitoring. To be honest, core Splunk is not one of the best tools for that task. Core Splunk's best part is managing those logs, plus its integration with other Splunk tools/services which are much better for those tasks. From the Splunk offering you should look at https://www.splunk.com/en_us/products/infrastructure-monitoring.html for pure server monitoring.
Other options than IA are Edge Processor, Ingest Processor and/or frozen buckets. https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/EdgeProcessor/AmazonS3Destination https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/AmazonS3Destination https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/DataSelfStorage With IA, EP and IP the output format is JSON, actually a HEC-suitable format. With IP you can also select Parquet if you want. If you are running your Enterprise in AWS, then you could configure your frozen buckets to be stored in S3 buckets. Depending on your cold2frozen script you could store only raw data in S3, or more if you really want it.
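For the frozen-bucket route, a hedged indexes.conf sketch; the index name, app and script path are placeholders, and the script itself (which would copy e.g. only the rawdata journal to S3) is not shown:
[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb
# run a custom script when buckets roll from cold to frozen; the script decides what gets copied to S3
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/apps/your_app/bin/cold2frozen_s3.py"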
Hi @asadnafees138 I assume you are using Ingest Actions to send your data to S3? If that is the case then you have the option of either raw, newline-delimited JSON or JSON format. This makes it easier for other tools, or for re-ingestion in the future, as it is not stored in a proprietary format. If you are ever looking to use Federated Search for S3 to search your S3 data in the future, then this requires newline-delimited JSON (ndjson). For more information I'd recommend checking out the Ingest Actions Architecture docs. Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
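As a purely hypothetical illustration, one line of such newline-delimited JSON in HEC-style format might look roughly like this (the exact keys in your export can differ):
{"time": 1747283532.397, "host": "web01", "source": "/var/log/app.log", "sourcetype": "app:logs", "event": "2025-05-15T04:32:12.397Z INFO ... [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf"}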
Hello Team, I need to back up my Splunk log data to an AWS S3 bucket. But I need to confirm in which format the logs will be stored, so that in case I need those logs in the future I can convert them into readable form. Please note, I really need to know the exact format of the log data stored in Splunk. Please confirm. Regards, Asad Nafees
Create a Python script to collect the logs and send them to Splunk? Why even use Splunk then? If you have a log collection method and a Bootstrap dashboard with a small local database, you have a full-fledged monitoring app. There are a thousand monitoring apps out there; what's one more?
Thank you. How can I use eval like that here? I mean, here the Status field contains some other text before and after the 'host is down' and 'database is down' values, and it varies with every event. I just wanted to put something like Status=like(*host is down*).
Here is some guidance on how to resolve the problem: https://community.splunk.com/t5/Splunk-Search/How-to-delete-duplicate-events/td-p/70656