All Posts

Pulling CMDB data from ServiceNow (SNOW) is generating around 10,000 errors per week and triggering long-running SQL queries on the SNOW side, which then time out when trying to query the CMDB table. The table holds over 10 million records and cannot be queried directly. Has anyone run into this issue before? How did you fix it? What other alternatives are there?
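One common alternative (a sketch, not a drop-in fix) is to stop querying the live CMDB table on demand and instead maintain a local lookup that a scheduled search refreshes incrementally, so each run only pulls recently changed records. The index, sourcetype, lookup name, and field list below are all assumptions; adjust them to however your ServiceNow data actually reaches Splunk (for example via the Splunk Add-on for ServiceNow):

```
| inputlookup cmdb_assets.csv
| append
    [ search index=snow sourcetype=snow:cmdb_ci earliest=-24h
      | table sys_id, name, sys_class_name, sys_updated_on ]
| dedup sys_id sortby -sys_updated_on
| outputlookup cmdb_assets.csv
```

Scheduled daily, this keeps cmdb_assets.csv current without ever scanning the full 10M-row table; searches can then enrich events with `| lookup cmdb_assets.csv sys_id`. Note you may need to seed the lookup file once with a plain `outputlookup` before the first scheduled run, since `inputlookup` errors if the file doesn't exist yet.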
@ITWhisperer It wasn't obvious at first glance for me either, but if you scroll back, "report_to_map_through_indexes" was actually the name of a saved search used in the solution. @Petermann As you can see in the docs for the map command, it takes as an argument either a literal search or the name of a saved search. In this case @ejwade used the latter option: the map command references a saved search named report_to_map_through_indexes, whose definition is shown in the original solution.
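For anyone landing here later, a minimal illustration of the two invocation styles (the saved-search name is just the one from this thread; the index name and field in the literal form are made up):

```
... | map report_to_map_through_indexes maxsearches=10

| makeresults
| eval index_name="main"
| map maxsearches=5 search="search index=$index_name$ | head 1"
```

In both forms, map runs the inner search once per incoming result, substituting each result's field values into the $field$ tokens.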
The raw message shows the correct field value, but stats & table truncate it. Raw message: Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable." What table & stats show: Message=| RO76 | PXS (DTI) - Server - Windows Server Down Critical | Server It is breaking at the embedded " character.
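Automatic key/value extraction stops at the first unescaped double quote inside the value, which is consistent with the truncation described above. One workaround, sketched here, is to re-extract the field from _raw with rex (the field name full_message and the end-of-line anchor are assumptions about the event layout):

```
... | rex field=_raw "Message=\"(?<full_message>.+)\"\s*$"
| table full_message
```

The greedy `.+` runs to the last quote on the line, so the embedded quotes inside the message are kept.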
Hi everyone, We're planning a new Splunk deployment and considering different scenarios (Plan A and B) based on daily ingestion and data retention needs. I would appreciate it if you could review the sizing and let me know if anything looks misaligned or could be optimized based on Splunk best practices.

Overview of each plan:

Plan A:
- Daily ingest: 2.0TB
- Retention: same
- 10 Indexers
- 3 Search Heads
- 2 ES Search Heads

Plan B:
- Daily ingest: 2.6TB
- Retention: same
- 13 Indexers
- 3 Search Heads
- 3 ES Search Heads

As mentioned, each plan also includes CM, MC, SH Deployer, DS, LM, 4–5 HFs, and several UBA/ML nodes.

Example specs per Indexer (Plan C):
- Memory: 128GB
- vCPU: 96 cores
- Disk: 500GB OS SSD + 6TB hot SSD + 30TB cold HDD + 11TB frozen (NAS)

What I'm looking for:
- Are these hardware specs reasonable per Splunk sizing guidelines?
- Is the number of indexers/search heads appropriate for the daily ingest and retention?
- Any red flags or over/under-sizing you would call out?

Thanks in advance for your insights!
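Not an answer on the node counts, but one sanity check worth running is the usual storage rule of thumb: Splunk's capacity planning guidance estimates indexed data on disk at roughly 50% of raw volume (about 15% compressed rawdata plus about 35% index files). A sketch of the arithmetic for Plan A, assuming a hypothetical 90-day retention and replication factor of 2, since the actual retention isn't stated in the post:

```
| makeresults
| eval daily_raw_gb=2048, retention_days=90, disk_factor=0.5, replication_factor=2
| eval total_disk_gb = daily_raw_gb * retention_days * disk_factor * replication_factor
| eval per_indexer_gb = total_disk_gb / 10
```

Under those assumptions that works out to roughly 18TB per indexer across hot and cold, which is in the same ballpark as the 6TB SSD + 30TB HDD spec listed; rerun the numbers with your real retention and replication/search factors before drawing conclusions.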
Hi @kn450, I'm having the same issue. Did you find a solution for this? Thank you!
Hello, I am setting up a test instance to be a license master and trying to point a second Splunk install at it. All instances are Splunk 9.4.1. I'm getting this error on the peer: "this license does not support being a remote master". I've installed a developer license and it shows 'can be remote', so I'm not sure why I cannot connect a peer to it. On the LM it lists 4 licenses and the 'dev' one is #2. Do I need to change the license group to activate the 'dev' license?
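For reference, the two pieces involved can both be set from config/CLI; hostnames are placeholders, and note that newer versions use "manager" where older docs say "master" (check your version's spelling before applying):

```
# On the peer, in $SPLUNK_HOME/etc/system/local/server.conf
[license]
manager_uri = https://<license-manager-host>:8089
```

```
# On the license manager: list the license groups, then activate one
splunk list licenser-groups
splunk edit licenser-groups <group_name> -is_active 1
```

Dev/test licenses typically live in their own license group, so the group does usually need to be made active before peers can attach against that license; treat that as something to verify for your specific license type rather than a guarantee.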
Hi @danielbb  No, you can only use the items in the dropdown. If you try to "Advanced Edit" the alert to use a field, you get a validation error. The only other thing you might be able to do is manually edit savedsearches.conf and *try* using a field returned in the search, however Your Mileage May Vary. This would also introduce management issues, as it might make the alert impossible to edit in the UI. So whilst I'm saying it might be possible, I wouldn't recommend it, I'm afraid. Did this answer help you? If so, please consider: Adding karma to show it was useful; Marking it as the solution if it resolved your issue; Commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @danielbb, instead of a scheduled report, use an alert that fires if the number of results is greater than 0. Ciao. Giuseppe
Hi @danielbb, could you describe your request in more detail? Are you speaking of Splunk Enterprise or Enterprise Security? Ciao. Giuseppe
Running version 9.3, log-local.cfg doesn't seem to be applied. Even after a restart, Splunk is emitting more than 10 of these INFO lines per second. This message should probably be moved to the DEBUG category... It is possible there's another issue with my instances, but this flood of logs is making it very hard to troubleshoot. `splunk set log-level TcpInputProc -level WARN` does work. Modifying log.cfg also works.
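In case it helps anyone comparing notes, the persistent equivalent of the CLI command is a category line in log-local.cfg; the sketch below assumes the category sits under the same stanza as it appears in your version's log.cfg (verify the stanza name there), and a restart is needed for it to take effect:

```
# $SPLUNK_HOME/etc/log-local.cfg
[splunkd]
category.TcpInputProc=WARN
```

If the entry looks right but still isn't applied, diffing it against the corresponding lines in log.cfg (which the poster reports does work when modified directly) is a quick way to spot a stanza or spelling mismatch.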
We would like to dynamically populate the severity field. Is that possible?
Hi @danielbb  If you want to conditionally run the email alert action, it needs to be an Alert rather than a report. This allows you to only send email if the number of results is greater than 0. What are the customer's reservations about having an alert vs a report? They are pretty much the same thing.
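For completeness, the "only send when results > 0" behaviour maps to a few settings in savedsearches.conf; the stanza name and recipient below are placeholders:

```
[My Scheduled Report]
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = someone@example.com
```

Converting the report to an alert in the UI writes essentially these same settings, which is part of why alerts and reports are "pretty much the same thing" under the hood.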
Is there a way to avoid sending an empty report? I'm thinking about converting the report to an alert, but the customer would like to keep it as a report.
Hi there, did you end up finding a resolution to this?
Have you checked the search log or splunkd.log on the remote search head?
Sorry, I thought I replied earlier. There were no major changes made at that time; the inbound data had changed drastically, breaking the parsing expressions. I found that just using the built-in JSON parsing initially wasn't working properly, but after massaging the data by dropping some leading characters in the stream, it works a lot better now. I don't have the particulars to provide at the moment, but this data is parsable without manually specifying regex expressions for each field or creating custom field extractions. Thanks for your message!
And what about the other one... My question is why the remote search doesn't work: splunk search "index="some remote index on splunk cloud" | head 10" I'm getting the following error: ERROR: Unknown error for indexer: <splunk cloud>. Search results may be incomplete. If this occurs frequently, check on the peer.
Hi @krutika_ag  Users are required to have the admin_all_objects capability and the power role to upload pictures/images to Dashboard Studio, which I admit is a bit frustrating, as you shouldn't be giving this capability out to non-admins! For more info please see https://splunk.my.site.com/customer/s/article/Capability-Required-to-Add-Images-to-Dashboard-Studio
@michael_vi wrote: I'm trying to run a search via CLI from a federated Splunk instance > Splunk Cloud. Everything is configured correctly and I have access to all the indexes on Splunk Cloud from the Federated instance via the web interface. But when I try to check the connection via CLI on the Federated Search instance: splunk display app -uri https://<splunk cloud uri>:8089 I get this error: argument uri is not supported by this handler That's because "-uri" is not an option to the display command. Run `splunk help display app` to see the full syntax.
Thanks for that bit! This is the rest of what I have come up with:

index=index sourcetype=sourcetype log_type=type host=host
| stats count
| eval Logs=case(count>0, "Green", count=0, "Red")
| eval pulse="pulse"
| table Logs pulse

The fillnull steps turned out to be unnecessary: stats count always returns a count (0 when there are no events), so Logs is never null. This will be in a Studio dashboard.