All Posts

In fact, if it's specific data sources that you want to send to different places, then you won't need to touch props/transforms - instead you can set _TCP_ROUTING in your inputs.conf stanzas, setting the value to the output group that you want to use, for example:

== inputs.conf ==
[monitor:///some/path/someFile.log]
index=someIndex
sourcetype=myAppLogs
_TCP_ROUTING=myOnPremOutputGroup

[monitor:///some/path/IIS/logs]
index=iis_logs
sourcetype=iis:logs
_TCP_ROUTING=mySplunkCloudOutputGroup

Also worth reading: https://community.splunk.com/t5/Getting-Data-In/Issue-with-default-outputs-when-TCP-ROUTING/m-p/509716

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
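For completeness, a minimal outputs.conf sketch defining the two output groups referenced in the stanzas above. The group names match the example; the server addresses are placeholders, and in practice the Splunk Cloud group usually comes from the Universal Forwarder credentials package downloaded from your stack rather than being hand-written:

== outputs.conf ==
[tcpout]
defaultGroup = myOnPremOutputGroup

[tcpout:myOnPremOutputGroup]
server = onprem-idx1.example.com:9997,onprem-idx2.example.com:9997

[tcpout:mySplunkCloudOutputGroup]
server = inputs1.yourstack.splunkcloud.com:9997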
Hi @Flobzh
Yes, you can achieve this with multiple output groups in your outputs.conf and then props.conf/transforms.conf to filter as required. For more detailed documentation and examples, check out https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding/Routeandfilterdatad

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
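As a hedged illustration of the props/transforms route (the sourcetype, group name, and regex are placeholders): this style of routing is applied at parse time, so it takes effect on a heavy forwarder or indexer rather than on a universal forwarder, where the inputs.conf _TCP_ROUTING approach above applies instead.

== props.conf ==
[iis:logs]
TRANSFORMS-routing = route_to_cloud

== transforms.conf ==
[route_to_cloud]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = mySplunkCloudOutputGroup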
Luckily, we didn't have to tackle this one. It was way more important to us to move the content created than to remove it later.
Hey, thank you for assisting. 1 PB is incredible - I thought 1.7 TB was a lot. I am not too sure about the instance type; I am reaching out to see. As far as SmartStore goes, we aren't using anything of the sort, I believe. We just have a retention policy to roll over cold/frozen data. I'll look over the docs regarding SmartStore.

In regard to minimum requirements, a lot of our administration servers (Deployment Server, some of the servers handling syslog data, DMZ heavy forwarders, Cluster Managers, etc.) have around 6-8 cores and roughly 6 CPUs. The server pictured below is our Azure Cluster Manager. It manages a cluster, and may index itself - I'm not sure - but it only has 8 CPUs and 4 cores. Should all servers at least meet the minimum requirements, especially with our ingestion load? I would imagine so. I can work with Support to answer any specific questions, as I already have an ODS case open to handle this Splunk version upgrade. This RHEL upgrade is being pushed because RHEL 7's support is expiring.

For reference, the minimum indexer specs: 12 physical CPU cores, or 24 vCPU at 2 GHz or greater speed per core. 12 GB RAM. A 1 Gb Ethernet NIC, optional second NIC for a management network.
Hello,
Is it possible to have only one Universal Forwarder installed on a Windows server, with this UF sending data to two different Splunk instances? For example:
1- Source: IIS logs -> Destination: Splunk Cloud
2- Source: Event Viewer data -> Destination: on-premise Splunk Enterprise
If yes, can you point to an article that helps set this up? One other possible constraint: we have a deployment server that should allow us to set up both flows.

Thanks for your help
Can anyone give me an idea, or a Python script, to generate a diag file in Splunk, log in to the Splunk support portal, enter the case number, and upload the file automatically?
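Not an official workflow, but a rough sketch of how this might be scripted by shelling out to the splunk diag command, which can generate the diag and upload it to an existing support case in one step. The paths, case number, and account are placeholders, and the upload flags should be confirmed against `splunk diag --help` on your version:

== generate_diag.py ==
#!/usr/bin/env python3
# Sketch: generate a diag on this Splunk host and upload it to a support case.
# Adjust SPLUNK_HOME, the case number, and the splunk.com account for your
# environment, and verify the --upload* flags exist in your Splunk version.
import getpass
import subprocess

SPLUNK_HOME = "/opt/splunk"        # placeholder install path
CASE_NUMBER = "1234567"            # your support case number (placeholder)
UPLOAD_USER = "you@example.com"    # splunk.com account with access to the case

password = getpass.getpass("splunk.com password: ")

subprocess.run(
    [
        f"{SPLUNK_HOME}/bin/splunk", "diag",
        "--upload",
        f"--case-number={CASE_NUMBER}",
        f"--upload-user={UPLOAD_USER}",
        f"--upload-password={password}",
        "--upload-description=Automated diag upload",
    ],
    check=True,  # raise if the diag generation or upload fails
)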
Hi!

@isoutamo looped me in because he knows I'm currently in an Azure environment that's doing ~1 PB/day.

First, are you using SmartStore to offload older events to blob storage? If so, at around 1 TB/day you're going to want to start thinking about splitting up your cluster, because Azure throttles blob upload/download. That WILL cause latency problems. And there's also a whole bunch of SmartStore tuning you'll need to consider to minimize cache thrashing. If you're not using SmartStore, then the math goes a completely different way.

Generally, what instance types are you using? We've evaluated the following and find them more than capable at our scale:
dasv5-series
dasv6-series
lsv3-series
ebsv5-series
edsv5-series
edsv6-series

If you ARE using SmartStore, keep in mind that there's no concept of hot/cold, just local disk/remote store, so some of the faster local NVMe options may not scale up to what you need for your local cache. Going for systems that don't have local NVMe and can instead scale attached disk for your local cache is the way to go. That was our situation, which is why we chose instances that don't have local disk but allow lots of disks to be attached. If you AREN'T using SmartStore, then you'll want to look at the other instance types and leverage the NVMe local disk for hot/warm and the attached disk as cold.

Beyond that, it's just a matter of picking the right size of your instance types to meet your SF/RF needs and data ingest/search load.

SmartStore/blob storage is really the piece that makes Azure unique. Let me know if you are using it and we can discuss how to go about splitting your storage account(s) and possibly splitting your cluster.
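To illustrate the kind of cache tuning mentioned above, here is a minimal cache manager sketch for server.conf on the indexers. The values are placeholders and the setting names should be checked against the server.conf spec for your version before use:

== server.conf ==
[cachemanager]
max_cache_size = 2000000        # local SmartStore cache budget in MB, sized to your local/attached disk
eviction_policy = lru           # evict least-recently-used buckets first
hotlist_recency_secs = 86400    # prefer keeping buckets with events newer than 1 day in cache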
I am working alongside the Unix team, as they have a better understanding of storage and resource requirements than I do. But this upgrade is long overdue. Thanks for the assistance - I think I have enough for the requests I am filling out.
Based on that information, your cluster sizing is not even near enough. I'm quite sure that your environment needs more than just CPU, too. In Azure there are some other limitations and recommendations which you must account for at your current volumes. Let's see if we can get some people who have worked more with bigger Azure Splunk installations.
Yes it is. I wasn't around when these were spun up, but now that it's up to me to fix it, I want to make sure we won't run into these issues for a few years. If we already have 10 physical indexers that handle most of the data, would I need the 96 vCPUs for the other 6 in Azure? I have to consider costs with this as well.

Thanks
You could access Splunk Cloud Platform's (SCP's) REST API, but you must enable it first. Here are instructions on how to do it: https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud
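Once API access is enabled, a minimal sketch of calling the search/jobs endpoint from Python against a Splunk Cloud stack. The host name, token, and search string are placeholders; depending on your stack you may authenticate with a user/password pair instead of a token, and your client IP may need to be allowlisted first:

== run_search.py ==
#!/usr/bin/env python3
# Sketch: create a blocking search job via the Splunk REST API and print the results.
import requests

BASE = "https://yourstack.splunkcloud.com:8089"           # management port of your stack (placeholder)
HEADERS = {"Authorization": "Bearer <your-auth-token>"}   # or pass auth=(user, password) instead

# Create a search job and wait for it to complete (exec_mode=blocking)
job = requests.post(
    f"{BASE}/services/search/jobs",
    headers=HEADERS,
    data={"search": "search index=_internal | head 5",
          "exec_mode": "blocking",
          "output_mode": "json"},
    timeout=300,
)
job.raise_for_status()
sid = job.json()["sid"]

# Fetch the finished job's results as JSON
results = requests.get(
    f"{BASE}/services/search/jobs/{sid}/results",
    headers=HEADERS,
    params={"output_mode": "json"},
    timeout=60,
)
results.raise_for_status()
for row in results.json().get("results", []):
    print(row)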
Hey, thanks for your response. With this Azure cluster, we receive latency alerts all of the time. Comparing the installed and available resources to the recommended ones, we are under-resourced. We have a clustered environment, with two clusters at different locations for both the Search Heads and the Indexers. We have 10 physical indexers, each with about 60 TB of storage, 48 cores, and 96 CPUs. Compare these indexers to the Azure ones, which have 4 cores and 8 CPUs.

Our Azure cluster has been giving us latency warnings for a few years now, even after a few upgrades. Now that I am more comfortable with our environment, I want to finally upgrade the CPU and memory to the recommended values.

We have 6 Azure indexers, all of which have latency issues at some point or another. We have about 100 indexes, our top 3 sources ingest 600 GB daily, and we average about 1.7 TB a day.

To sum up, these are under-resourced and need more CPU.

Thanks
Yes, I have played with those modes; they help in many cases, and you can now do things which haven't been possible with e.g. 7.x versions. They are defined here: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges

But out of curiosity: how do you manage the case where you initially push all configurations from the old SHC, and a user later removes e.g. an alert from the SHC via the GUI - how do you keep it from reappearing after the next apply, without having to remove it from the deployer first?
Hello,
This is Krishna, and I have been doing a POC on accessing Splunk logs through REST APIs. I was successful in calling the REST APIs on Splunk Enterprise, but in my company we have Splunk Cloud, so I am unable to call the REST APIs the way I could in Splunk Enterprise. I would like to know how I can call the Splunk REST APIs for the Cloud edition. Below are my findings:
On my local instance of Splunk, when I hit https://localhost:8089/services it lists all the services available (it asked me for admin credentials, which I provided). I am interested in https://localhost:8089/services/search/jobs and would like to call the equivalent for the Cloud version.

Thanks in advance.
Hello Splunk Community!

Welcome to another week of fun curated content as a part of our Splunk Answers Community Content Calendar! This week's posts are about regex extraction, Search & Reporting, and Splunk Enterprise. We will be highlighting some of our Splunk users and experts who contributed in the Splunk Search board.

Splunk Community Spotlight: Extracting FQDN from JSON Using Regex
Regex can be a tricky beast, especially when you're trying to extract specific values from structured fields like JSON. Fortunately, the Splunk community continues to come together to solve these common challenges.

The Challenge
Here we're highlighting a helpful interaction from the Splunk Search board on Splunk Answers, where Karthikeya asked for assistance refining a regex pattern to extract a fully qualified domain name (FQDN) from a JSON field.

The Solution
The solution was simple, effective, and broadly applicable. Gcusello explains that the regex captures everything after the prefix (such as v-) up until the trailing port number, assigning it to the field fqdn. The solution was also accompanied by a link to Regex101, allowing others to test and validate the expression. This improved version is more readable and reliable across similar patterns.

Field extraction is a foundational task in Splunk, whether you're indexing logs, building dashboards, or writing alerts. Getting it right the first time helps ensure your data is usable and searchable in meaningful ways. Regex plays a crucial role in this, but without best practices, you may end up with inconsistent results. Thanks to contributors like gcusello, users gain cleaner, more maintainable solutions that benefit the entire community.

Splunk Community Spotlight: Joining Data Across Indexes with Field Coalescing
In the fast-paced world of network monitoring and security, Splunk users often need to correlate data across multiple indexes. In this week's Splunk Search board, we're diving into a great question raised by MrGlass and how yuanliu and gcusello helped unlock the solution with a clean and efficient approach.

The Challenge
While trying to correlate data between two indexes, the user found that running the search over a larger time window made the data inconsistent, likely due to the structure of the join and differences in field naming.

The Solution
Gcusello provides a great solution and yuanliu refines it to help the user. Rather than renaming fields individually or relying on joins (which can break or jumble data across time windows), both of them offered a streamlined approach using conditional logic to cleanly unify the data. Working with multiple data sources in Splunk is common, especially in environments where network telemetry and authentication logs live in different places. Field normalization using coalesce() ensures your queries remain flexible, efficient, and easier to manage at scale.

Key Takeaways
Splunk's community is a powerful learning resource, from regex optimizations to full-scale deployment strategies. If you're ever stuck, don't hesitate to ask a question on Splunk Answers, and you might just find a simpler, better way to solve your challenge.

Shout-Out
Big thanks to Karthikeya & MrGlass for raising the issues and to gcusello & yuanliu for providing such clean and effective solutions. This is the kind of collaborative problem-solving that makes the Splunk Community thrive!

Would you like to feature more solutions like this?
Reach out to @Anam Siddique on Slack or @Anam on Splunk Answers to highlight your question, answer, or tip in an upcoming Community Content post!

Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals. Here are some great ways to get involved and expand your Splunk expertise:
- Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
- Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
- Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
- User Groups: Join meetups and connect with other Splunk practitioners in your area.
- Splunk Community Programs: Get involved in exclusive programs like SplunkTrust and Super Users, where you can earn recognition and contribute to the community.

And don't forget, you can connect with Splunk users and experts in real time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
"Then you must remember that if/when you have deploy those apps via deployer into SHC members you push everything into default directories. This means that if users have previously created those e.g.... See more...
"Then you must remember that if/when you have deploy those apps via deployer into SHC members you push everything into default directories. This means that if users have previously created those e.g. alerts by GUI then they cannot remove those what you have deployed by deployer. They can change those but admins must remove those from deployer and then push again into members." That's not entirely true. Whether the settings get pushed into default or local depends on the push mode. But yes, migrating contents is something that's best done with a bit of thinking, not just blindly copying files from point A to B.
No. You have 8 vCPUs whereas the minimum indexer specs show 24vCPUs. Actually it's a bit more complicated than that. Since most if not all modern CPUs include hyper-threading, OS shows more "CPUs" t... See more...
No. You have 8 vCPUs, whereas the minimum indexer specs call for 24 vCPUs.

Actually, it's a bit more complicated than that. Since most if not all modern CPUs include hyper-threading, the OS shows more "CPUs" than the die actually has cores. It's not easy to calculate the performance of such a setup, though, since hyper-threading works pretty well with multithreaded applications but doesn't give much of a performance boost with many single-threaded ones. Anyway, you most probably have a virtualized 4-core, 2-threads-per-core CPU, which is really low-end for a production indexer. Yes, you can run a Splunk lab at home on 4 cores, but if you want to put any reasonable load on the box, you'll have a lot of problems. As a rule of thumb, Splunk uses 4-6 vCPUs just for single-pipeline indexing, so there's not much juice left in your hardware for searching. This box is really undersized.
I want to get the Splunk log as the actual log message on the Splunk server. I am getting all of the fields (instant, thread, level, message attributes) in Splunk, but I don't want to receive it like that.
Hi,
I have a JSON message with 4-5 JSON key-value pairs. I want to remove some of the fields and modify the body before sending it to the Splunk server. On the OTel Collector side I tried using the filelog receiver and transform processor log statements to set the body. My JSON contains:

Body: Str({"instant":{"epochSecond":1747736470,"nanoOfSecond":616797000},"thread":"ListProc-Q0:I0","level":"DEBUG", "message":"{actual log message}})

My requirement is to remove the instant, thread, and level fields and send only the JSON value of the message field, which comes dynamically. The updated body is getting printed in the debug log, but the Splunk server is still showing the original body. My transform configuration:

transform:
  log_statements:
    - context: log
      statements:
        - 'set(body, ParseJSON(body)["message"])'
        - 'set(body, "extracted")'

But my Splunk server is showing the original body as is.

Can someone please help me with this issue?
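A hedged sketch of how the transform processor might be wired in, in case the issue is that the processor is defined but not referenced in the logs pipeline. The file path, HEC endpoint, and token are placeholders, not taken from the original config:

receivers:
  filelog:
    include: [/var/log/myapp/*.log]     # placeholder path to the app log

processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Replace the whole body with the JSON value of the "message" field
          - set(body, ParseJSON(body)["message"])

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://splunk.example.com:8088/services/collector"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [transform]    # the transform must appear here for it to affect exported data
      exporters: [splunk_hec]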
Thank you for the insight! I made this modification on my dashboard. However, the drilldown is still not accurately setting the token.