All Posts



Hi - Is there a way to warn the user when they try to execute the outputlookup command from the front end, to avoid accidentally deleting records from the KV Store? Thank you
Hi @KKuser, first of all, don't attach a new question to such an old one (nine years ago!), even if it's on the same topic, because it's difficult to get an answer; it's always better to create a new question.

Anyway, if you need information about a validated Splunk architecture for an on-premises or hybrid installation, see https://docs.splunk.com/Documentation/SVA/current/Architectures/About

In Splunk Cloud you only see two machines: one Search Head for ES and one Search Head for the other apps. You don't know if there's a Search Head Cluster; probably not, both because you see only two machines (an SH Cluster needs at least three) and because you can upload apps, which isn't possible on SH Clusters. In addition, the Indexer layer is not visible to you, even though you see three Indexers, and you cannot see the Cluster Manager. There are surely many instances of Splunk Cloud on different AWS machines.

For more information see https://docs.splunk.com/Documentation/SVA/current/Architectures/SCPExperience

Ciao. Giuseppe
Hi @Nicolas2203 - Are you still seeing data containing "TheAppResourceGroupName" in the original index?
Hello,

My use case / context: on Azure, data from several applications is pushed into an Azure Event Hub. I need to separate the data from one application and put it into a new index on Splunk. On Azure, all the resources of this app are in one Resource Group: TheAppResourceGroupName.

I used a Heavy Forwarder, and these are my configs:

props.conf:
[source::eventhub://EVENTHUBAZURE.servicebus.windows.net/app-logs;]
TRANSFORMS-route = routeToNewIndex, discard_original

transforms.conf:
[routeToNewIndex]
REGEX = TheAppResourceGroupName
DEST_KEY = _MetaData:Index
FORMAT = NewIndex

[discard_original]
REGEX = TheAppResourceGroupName
DEST_KEY = queue
FORMAT = nullQueue

This config does delete the data, yes, but in NewIndex rather than in the original index after the routing. I didn't find an answer which fits my needs on the community or in the docs, but maybe someone has faced a similar need. Thanks a lot for the help! Nico
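For reference, here is a route-only variant I'm considering (a sketch only; my understanding is that rewriting _MetaData:Index sends the event to the new index instead of the original, since an event is indexed only once, which would make the nullQueue transform unnecessary):

```ini
# props.conf - sketch, same source stanza as above
[source::eventhub://EVENTHUBAZURE.servicebus.windows.net/app-logs;]
TRANSFORMS-route = routeToNewIndex

# transforms.conf - route matching events to NewIndex only
[routeToNewIndex]
REGEX = TheAppResourceGroupName
DEST_KEY = _MetaData:Index
FORMAT = NewIndex
```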
Hi @SN1

Did you use a user-seed.conf when setting up the new SH? And/or did you create an admin user that isn't called "admin"? It sounds like your admin user does not exist, but there are searches owned by the "admin" user.

To resolve this, change "owner = admin" to a user which exists (ideally a service user) in the default.meta (or local.meta) files in all apps on your SH ($SPLUNK_HOME/etc/apps/<appName>/metadata/*.meta).

Also check out https://community.splunk.com/t5/Reporting/Why-am-I-getting-error-quot-DispatchManager-The-user-admin-quot/m-p/196168

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
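As a minimal sketch of the metadata change (the stanza and the user "svc_searches" are placeholders; use whichever user actually exists on your new SH):

```ini
# $SPLUNK_HOME/etc/apps/<appName>/metadata/local.meta
# reassign ownership of saved searches away from the missing "admin" user
[savedsearches]
owner = svc_searches
```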
I see an architecture online for Splunk Cloud. It has a Search Tier [Search Head (core), Search Head (Enterprise Security)], an Indexing Tier (the picture shows 3 indexers), and a Management Tier [Cluster Manager]. Is this a valid Splunk Cloud architecture? If there is a search head cluster, would it be shown in the architecture diagram? I'm also trying to figure out: if there are multiple instances of Splunk Cloud, can knowledge objects present in one instance be seen in the other instance as well?
I'm operating Splunk Cloud, and the addresses are something like abc1.splunkcloud.com and abc2.splunkcloud.com. I'm trying to get a dashboard in the Search and Reporting app to be visible in both instances.

1. Are my instances independent of each other?
2. Can I set anything up so that the dashboard is visible in both instances?
3. Following on from 'You can also setup federated search between different instances so they can search the same data.', how can I do that?
Created 4 panels for waf_logs as below:

Base Search:
index=a sourcetype=xxx:xxxx | fields * | fillnull value="NULL"

Panel 1:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.rule_group
| rename waf_log.rule_logs{}.rule_group as "Rule Group"
| sort - count

Panel 2:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.rule_id
| rename waf_log.rule_logs{}.rule_id as "Rule ID"
| sort - count

Panel 3:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.status
| rename waf_log.status as "Log Status"
| sort - count

Panel 4:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.msg
| rename waf_log.rule_logs{}.msg as "Log Message"
| sort - count

Any suggestions on these dashboard panels to make them more readable when users click on any of the values?
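One possible shape for this in Simple XML (a sketch only; the query and token names come from the panels above, and the drilldown behaviour - clicking a row to set the token the other panels filter on - is an assumption about the intent):

```xml
<form>
  <!-- shared base search; the four panels post-process it -->
  <search id="base">
    <query>index=a sourcetype=xxx:xxxx | fields * | fillnull value="NULL"</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Log Status</title>
      <table>
        <search base="base">
          <query>search waf_log.status="$log_status$"
| stats count by waf_log.status
| rename waf_log.status as "Log Status"
| sort - count</query>
        </search>
        <!-- clicking a value updates the token used by the other panels -->
        <drilldown>
          <set token="log_status">$click.value$</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
```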
I see, sorry - I don't think it is possible to achieve what you are looking for without removing the fields you don't want to see from the source data.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
I don't understand why, but after removing everything from the Web UI and manually configuring the script in inputs.conf, it works: data flows into the index like a charm.
Hi @ITWhisperer,

We want the log events to be presented in a way that is useful for our app owners. For example, in my sample log:

avg_ingress_latency_fe: 0
cacheable: true
client_dest_port: 443
client_insights:

These strings at the beginning are not useful at all (but can't be removed), while waf_log, which is at the bottom, is more important, and we want it at the beginning.

@livehybrid @ITWhisperer Yes, I achieved it by creating a dashboard, but even after they click on any dashboard panel, they still see the same less important strings (the same event format), which is not what we want.
Hi @Karthikeya

The reason waf_logs is at the bottom is that JSON fields are output in alphabetical order when viewed as a JSON-formatted event, and it isn't expanded because it is a child of the main event. These things cannot be changed when viewing events this way; however, you could create dashboards to display the data in a table or something like that, if that is preferred.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
Hi @_joe

Further to my previous reply, I've found that the app is also on GitHub (https://github.com/jorritfolmer/TA-ct-log). There are also contact details on the user's GitHub profile page (https://github.com/jorritfolmer); although I won't post them directly here, you can see them at that link if you want to try to make contact. Failing that, do you have resource available to work on the archived app to make it Python 3 compatible?

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
You could leave it that way, but you're maintaining 200 connections to the downstream receivers. If you have, for example, 16 cores on your intermediate forwarder and want to leave 2 cores free for other activity (so much overhead!), you can do the same thing with larger queues and fewer pipelines by increasing maxSize values by the same relative factor. If your forwarder doesn't have enough memory to hold all queues, keep an eye on memory, paging, and disk queue metrics.
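As a rough illustration of that trade-off (values are examples, not recommendations; 14 pipelines matches the 16-cores-minus-2 arithmetic above, and the queue size is an assumed starting point):

```ini
# server.conf on the intermediate forwarder
[general]
# one ingestion pipeline per core left for ingestion (16 cores - 2 reserved)
parallelIngestionPipelines = 14

# grow queue sizes by the same relative factor you shrank the pipeline count
[queue=parsingQueue]
maxSize = 32MB
```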
Hi @_joe

The app is archived because it hasn't been updated for over 4 years. It is a community app built by Jorrit Folmer (@jorritf) - so with a bit of luck they might see this and be able to respond!

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
What do you hope to achieve which can't be done in SPL and your dashboard searches?
We have different indexes and different roles created for different users. My question is: can I create an app and give access to specific groups of users? How do I do that?

App 1 --- different index --- restricted to app team 1
App 2 --- different index --- restricted to app team 2

Now apps 1 and 2 belong to the ABC group. I want ABC as an app, and app teams 1 and 2 should have access only to the ABC app, with each team able to access only its assigned logs (app 1 logs or app 2 logs).
Hello, does anyone know if there are any plans for this app to become compatible with recent versions of Splunk? It claims to be compatible with 9.4, but it is running Python 2.
I appreciate the reply, but this is why I am asking the question: I cannot find any information about a timeout in the documentation for this. If there is no timeout, that is fine; I just want to know.
Hi @splunklearner,

Access grants in Splunk are managed at the index level. Do you have all these data in different indexes, or all in the same index?

If they're in different indexes, you can enable each group of users (identified by a proper role) to access one index; then you can also use the same app, but users will see only the indexes enabled for them. In [Settings > Roles > Indexes] you can define the enabled indexes for each role.

If they are in the same index, it's more difficult: you could try to create a rule, at the role level, to access only events that match a filter (e.g. applications 1 to 10), but it's harder to manage the exceptions. In [Settings > Roles > Restrictions] you can define the search filters for that role.

Ciao. Giuseppe
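As a sketch of the per-role index grants (role and index names are placeholders; these are the settings the [Settings > Roles] UI writes to authorize.conf):

```ini
# authorize.conf - one role per app team (names are placeholders)
[role_app1_team]
importRoles = user
srchIndexesAllowed = app1_index
srchIndexesDefault = app1_index

[role_app2_team]
importRoles = user
srchIndexesAllowed = app2_index
srchIndexesDefault = app2_index
```

For the same-index case, srchFilter on the role (the Restrictions tab mentioned above) is the corresponding setting for event-level filtering.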