All Posts

Hi @danielbb  From memory, the smallest Cloud stack I've seen is 50GB, although that is only based on some of the smaller customers I have worked with. It is possible to get ingestion-based licenses for on-premises, which I believe are available from 1GB upwards in small increments, so you would be able to purchase a 2GB license. I think your best option here is to speak to sales; if you don't have a contact, go via https://www.splunk.com/en_us/talk-to-sales/pricing.html?expertCode=sales and tell them you wish to do a PoC. They should be able to provide you a proper license for the PoC rather than the trial license available online. Happy Splunking!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
We are creating a small cluster with minimal ingestion of around 2 GB a day on-prem. I wonder what would be the best way to approach the license: is a usage-based (rather than ingestion-based) license available for an on-prem environment? For something so small, does it make sense to switch and have it in the cloud?
Can you explain in more detail what your issue with that JSON is? It contains some data, and what do you want to do with it?
Yes, this has recently been added to the official documentation. But currently there are still tools, e.g. AppInspect, which don't accept it in Cloud vetting. From what I have heard this will be fixed soon, so then you could use it officially with Cloud too.
Of course you can add dedup to those queries, but this will kill your performance! It also depends on how this duplication happened and how you can identify those events: can you e.g. dedup on _raw or on just a set of fields, or does it need some calculations/modifications (e.g. to times) too?
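To illustrate the difference, a minimal sketch of both variants (the index, sourcetype and field names are placeholders, not from the original question):
index=your_index sourcetype=your_sourcetype | dedup _raw
index=your_index sourcetype=your_sourcetype | dedup host source event_id
The first treats two events as duplicates only when the entire raw text is identical; the second compares just the listed fields, which is cheaper but can collapse events that merely share those values.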
Unfortunately, at least I don't know any generic answer for this. The method they presented there is one option, but as said, you need to be 100% sure that it works with your data, and test it several times to be sure! And of course you must first get rid of the source of those new duplicates and ensure that all your inputs work as they should without duplicating new events. After that you probably could do that delete, if you are absolutely sure that it also works in your case. And I propose that you use some temporary account which has the can_delete role, just for the time needed to do that clean-up.
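As one way to verify that no new duplicates are still arriving before any clean-up, here is a rough sketch (the index and sourcetype names are placeholders):
index=your_index sourcetype=your_sourcetype
| streamstats count as copy_number by _raw, _time
| where copy_number > 1
This returns only the second and later copies of events whose raw text and timestamp are identical; if it returns nothing over a recent time range, your inputs are no longer producing duplicates.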
Hi @Harikiranjammul  If you can, restrict the first part of the search by adding (Status="*host is down*" OR Status="*Some other criteria*").
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
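As a rough sketch of how that restriction might sit at the front of the search (the index name and the second pattern are just placeholders):
index=your_index (Status="*host is down*" OR Status="*database is down*")
| ...
Filtering in the base search lets the indexers discard non-matching events early rather than filtering them later in the pipeline.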
I don't like the idea that you cannot add dedup with pipelines in the simple base search of a dataset. Splunk should offer a ready-made method to deduplicate an index.
IMHO: I don't like or suggest adding delete permissions to anyone permanently! It isn't a great idea to run scheduled jobs which remove events from Splunk.
There are several tools for server monitoring. To be honest, core Splunk is not one of the best tools for that task. Core Splunk's strength is managing those logs, along with its integration with other Splunk tools/services which are much better suited to that job. From the Splunk offering, you should look at https://www.splunk.com/en_us/products/infrastructure-monitoring.html for pure server monitoring.
Other options than IA are Edge Processor, Ingest Processor and/or frozen buckets.
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/EdgeProcessor/AmazonS3Destination
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/AmazonS3Destination
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/DataSelfStorage
With IA, EP and IP the output format is JSON, in fact a HEC-suitable format. With IP you can also select Parquet if you want. If you are running Splunk Enterprise in AWS, then you could configure your frozen buckets to be stored in S3 buckets. Depending on your cold2frozen script, you could store only raw data in S3, or more if you really want it.
Hi @asadnafees138  I assume you are using Ingest Actions to send your data to S3? If that is the case then you have the option of either raw, newline-delimited JSON or JSON format. This makes it easier for other tools, or for re-ingestion in the future, as the data is not stored in a proprietary format. If you are ever looking to use Federated Search for Amazon S3 to search your S3 data in the future, then this requires newline-delimited JSON (ndjson). For more information I'd recommend checking out the Ingest Actions Architecture docs.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello Team, I need to back up my Splunk log data to an AWS S3 bucket. But I need to confirm in which format the logs will be stored, so that if I need those logs in the future I can convert them into a readable form. Please note, I really need to know the exact format of the log data as stored in Splunk. Please confirm. Regards, Asad Nafees
Create a Python script to collect the logs and send it to Splunk? Why even use Splunk then? If you have a log collection method and a Bootstrap dashboard with a small local database, you have a full-fledged monitoring app. There are a thousand monitoring apps out there. What's one more?
Thank you. How can I use eval here? I mean, here the Status field contains some other text before and after the 'Host is down' and 'database is down' values, and that text varies with every event. I just wanted to put something like status=like(*host is down*).
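In eval, the like() function takes the field and a pattern where % is the wildcard, so a sketch of how the earlier case() could be adapted (field names assumed from the previous example):
| eval description=case(
    like(Status, "%host is down%"), "date=" . date . " Server=" . Server . " Status=" . Status . " Threshold=" . Threshold,
    like(Status, "%database is down%"), "date=" . date . " Db=" . Db . " Status=" . Status . " Instance_status=" . Instance_status)
This matches events where the phrase appears anywhere inside Status, so the surrounding text no longer has to match exactly; match(Status, "host is down") with a regular expression would also work.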
Here is some guidance on how to resolve the problem: https://community.splunk.com/t5/Splunk-Search/How-to-delete-duplicate-events/td-p/70656
I have an identical situation as described here <https://community.splunk.com/t5/Reporting/How-to-not-include-the-duplicated-events-while-accelerating-the/m-p/244884>
Hi @Harikiranjammul  Use an eval statement with a conditional to build the description field based on the value of Status.
| makeresults
| eval Server="host1", Status="host is down", Threshold="unable to ping"
| append [| makeresults | eval Db="db1", Status="database is down", Instance_status="DB instance is not available"]
| eval date=strftime(_time, "%d/%m/%Y %H:%M:%S")
| eval description=case(
    Status=="database is down", "date=" . date . " Db=" . Db . " Status=" . Status . " Instance_status=" . Instance_status,
    Status=="host is down", "date=" . date . " Server=" . Server . " Status=" . Status . " Threshold=" . Threshold)
This SPL checks the Status field and constructs the description field by concatenating the relevant fields for each case. Ensure your field names match exactly (case-sensitive) and are extracted correctly before using this logic.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thank you! I changed from the table command to the fields command.  When I tried to use a drilldown again (set host_token = $row.host$), it still didn't work...any ideas?
| eval description=case(
    Status=="host is down", Date . "," . Server . "," . Status . "," . Threshold,
    Status=="database is down", Date . "," . Db . "," . Status . "," . 'Instance status')