Hi @boknows , please don't duplicate questions! See my answer to your other question: https://community.splunk.com/t5/Splunk-Search/Host-override-with-event-data/m-p/712235#M240309 Ciao. Giuseppe
Hi @boknows, the transforms.conf isn't correct: you aren't performing a field extraction, so please try:

[changehost]
DEST_KEY = MetaData:Host
REGEX = ^([^\s]+)
FORMAT = host::$1

Then, where did you put these configurations? They must be located in the first full Splunk instance the events pass through, in other words the first Heavy Forwarder or, if no HF is present, the Indexers. Ciao. Giuseppe
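To sanity-check the REGEX outside Splunk (a minimal sketch; the sample raw events below are made up for illustration), the same pattern can be exercised in Python:

```python
import re

# Same pattern as the transform: capture everything up to the first whitespace
HOST_RE = re.compile(r"^([^\s]+)")

def extract_host(raw_event):
    """Return the would-be host override for a raw event, or None if no match."""
    m = HOST_RE.match(raw_event)
    return m.group(1) if m else None

# Hypothetical raw events where the host name leads the line
print(extract_host("web01.example.com 2024-05-01 12:00:00 GET /index.html"))  # web01.example.com
print(extract_host("   leading-space event"))  # None: the pattern is anchored at the start
```

Note the anchor: an event beginning with whitespace produces no match, so no host override would be applied to it.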
I am trying to get a list of Splunk users and searches that are searching back 30 days or longer.
1. Are my instances independent of each other?
A: Yes - these are independent of each other.
2. Can I have anything set up so that the dashboard is visible in both instances?
A: I'm not aware of anything you can do out of the box to automate the sync between the two instances; the only thing you could do is write some custom code that uses the Splunk REST API to sync dashboards between them. I would recommend having your dashboards in an app in source control (e.g. GitLab/GitHub) and then deploying to both instances to give you the same dashboards on both.
3. Following on from 'You can also setup federated search between different instances so they can search the same data.', how can I do it?
A: This is a good starting point (https://www.splunk.com/en_us/blog/platform/introducing-splunk-federated-search.html) which covers the basics of Federated Search. This won't allow you to sync dashboards, but should give you access to data on either Splunk Cloud stack.
Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
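As a rough illustration of the custom-code route (a sketch only: the stack hostnames are placeholders, authentication and error handling are omitted, and port 8089 is the usual Splunk management port, which may be restricted on Splunk Cloud), the standard /data/ui/views REST endpoint holds dashboard XML, so a sync amounts to a GET from one stack and a POST to the other:

```python
from urllib.parse import quote

def view_url(stack, app, owner="nobody", view=None):
    """Build the Splunk REST URL for dashboard (view) objects on a stack.

    stack is the hostname, e.g. "abc1.splunkcloud.com" (placeholder).
    With view=None the collection URL is returned (for listing or POSTing new views).
    """
    base = f"https://{stack}:8089/servicesNS/{quote(owner)}/{quote(app)}/data/ui/views"
    return f"{base}/{quote(view)}" if view else base

# Sketch of a sync: GET the dashboard XML from one stack, POST it to the other.
src = view_url("abc1.splunkcloud.com", "search", view="my_dashboard")
dst = view_url("abc2.splunkcloud.com", "search")
print(src)  # https://abc1.splunkcloud.com:8089/servicesNS/nobody/search/data/ui/views/my_dashboard
```

The source-control route mentioned above is usually simpler: keep the views in an app's default/data/ui/views directory and deploy the same app to both stacks.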
Hi @scout29
Try the following query: it subtracts the earliest time from the latest time to get the number of seconds each search spans, then filters where the span exceeds 30 days (2592000 seconds):

index=_audit action="search" info=completed
| eval et=COALESCE(search_et, api_et), lt=COALESCE(search_lt, api_lt)
| eval timespanSecs=lt-et
| where timespanSecs>2592000

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
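The same span arithmetic, sketched outside Splunk with made-up audit rows (epoch seconds), just to make the threshold concrete; the et/lt keys mirror search_et/search_lt from index=_audit:

```python
THIRTY_DAYS = 30 * 24 * 60 * 60  # 2592000 seconds

def long_running(searches, threshold=THIRTY_DAYS):
    """Keep records whose search window (lt - et) meets the threshold."""
    return [s for s in searches if s["lt"] - s["et"] >= threshold]

# Hypothetical audit rows
rows = [
    {"user": "user123", "et": 0,       "lt": 2592000},   # exactly 30 days
    {"user": "user456", "et": 1000000, "lt": 1003600},   # one hour
]
print([r["user"] for r in long_running(rows)])  # ['user123']
```

This sketch uses >= so that "30 days or longer" includes a window of exactly 30 days; adjust the comparison to taste.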
Hi @splunklearner, you can create apps and give some Roles access to each one. But assigning apps to Roles solves only part of the problem, because you also have to assign data access to the Roles, which you can do in [Settings > Roles]. Finally, you have to associate your groups with Roles: Roles drive the access policies, and Users are associated with access grants through Roles. Ciao. Giuseppe
I am trying to create a search that shows me all users that are searching back 30 days or longer in Splunk. For example, if user123 ran a search of "index=windows" and selected a time picker value of 30 days, I would want the search results to show me the username, the search used, and the time picker value used. I would want to see this for any users searching back 30 days or more in Splunk. This is what I have started with as a base search, but I am not finding any fields that show a time picker value: index=_audit action="search"
Hello @livehybrid, yes, the data are still arriving in the original index and they contain "TheAppResourceGroupName".
Hi @vksplunk1
outputlookup is already categorised as a risky command in terms of protection against SPL in clicked links or in dashboards ("In the Search app, the warning dialog box appears when you click a link or type a URL that loads a search which contains risky commands. In dashboards, the warning dialog box appears automatically unless an input or visualization contains a search with a risky command"); however, it is not currently possible to display the alert if a user just types the command themselves into the search bar. Check out https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/SPLsafeguards for more information.
Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi - Is there a way to warn the user when they try to execute the outputlookup command from the front end, to avoid accidentally deleting records from the KV store? Thank you
Hi @KKuser, first of all, don't attach a new question to such an old one (nine years ago!), even if it's on the same topic, because it's difficult to get an answer; it's always better to create a new question. Anyway, if you need information about validated Splunk architectures for an on-premise or hybrid installation, see https://docs.splunk.com/Documentation/SVA/current/Architectures/About Anyway, in Splunk Cloud you only see two machines: one Search Head for ES and one Search Head for the other apps. You don't know if there's a Search Head Cluster; probably not, both because you see only two machines and an SH Cluster needs at least three, and because you can upload apps, an operation that isn't possible on SH Clusters. In addition, the Indexer layer is not visible to you even if you see three Indexers, and you cannot see the Cluster Manager. Surely there are many instances of Splunk Cloud on different AWS machines. For more information see https://docs.splunk.com/Documentation/SVA/current/Architectures/SCPExperience Ciao. Giuseppe
Hi @Nicolas2203
Are you still seeing data containing "TheAppResourceGroupName" in the original index?
Hello, my use case:
Context: on Azure, data from several applications are pushed into an Azure Event Hub. I need to separate the data from one application and put that data into a new index on Splunk. On Azure, all the resources of this app are in one Resource Group: TheAppResourceGroupName. I used a Heavy Forwarder, and these are my configs:

props.conf:
[source::eventhub://EVENTHUBAZURE.servicebus.windows.net/app-logs;]
TRANSFORMS-route = routeToNewIndex, discard_original

transforms.conf:
[routeToNewIndex]
REGEX = TheAppResourceGroupName
DEST_KEY = _MetaData:Index
FORMAT = NewIndex

[discard_original]
REGEX = TheAppResourceGroupName
DEST_KEY = queue
FORMAT = nullQueue

This config will delete the data, yes, but in NewIndex, and not in the original index, after the routing. I didn't find an answer which fits my needs in the community or the docs, but maybe someone has faced a similar need. Thanks a lot for the help! Nico
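For what it's worth, a routing-only sketch (assuming these events pass through the HF's parsing pipeline): because a transform that sets _MetaData:Index rewrites the destination index before the event is written, matching events never reach the original index, so a separate nullQueue transform is usually unnecessary; and since it matches the same REGEX, it would discard the event outright rather than just the "original" copy.

```
props.conf:
[source::eventhub://EVENTHUBAZURE.servicebus.windows.net/app-logs;]
TRANSFORMS-route = routeToNewIndex

transforms.conf:
[routeToNewIndex]
REGEX = TheAppResourceGroupName
DEST_KEY = _MetaData:Index
FORMAT = NewIndex
```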
Hi @SN1
Did you use a user-seed.conf when setting up the new SH? And/or did you create an admin user that isn't called "admin"? It sounds like your admin user does not exist, but there are searches owned by the "admin" user. To resolve this, find "owner = admin" in the default.meta (or local.meta) files in all apps on your SH ($SPLUNK_HOME/etc/apps/<appName>/metadata/*.meta) and update it to a user which exists (ideally a service user).
Also check out https://community.splunk.com/t5/Reporting/Why-am-I-getting-error-quot-DispatchManager-The-user-admin-quot/m-p/196168
Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
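For illustration (the object and user names here are hypothetical), a reassigned entry in an app's metadata/local.meta might look like the following; note that object names in .meta files are URL-encoded, so spaces appear as %20:

```
[savedsearches/My%20Scheduled%20Search]
owner = svc_splunk
```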
I see an architecture online for Splunk Cloud. The Splunk Cloud has a Search Tier [Search Head (core), Search Head (Enterprise Security)], an Indexing Tier (I see 3 indexers in the picture), and a Management Tier [Cluster Manager]. Is this a valid Splunk Cloud architecture? If there is a search head cluster, would it be shown in the architecture diagram? I'm trying to figure out: if there are multiple instances of Splunk Cloud, can knowledge objects present in one instance be seen in the other instance as well?
I'm operating Splunk Cloud, and the addresses are something like abc1.splunkcloud.com and abc2.splunkcloud.com. I'm trying to get a dashboard in the Search and Reporting app to be visible in both instances.
1. Are my instances independent of each other?
2. Can I have anything set up so that the dashboard is visible in both instances?
3. Following on from 'You can also setup federated search between different instances so they can search the same data.', how can I do it?
Created 4 panels for waf_logs as below:

Base Search:
index=a sourcetype=xxx:xxxx | fields * | fillnull value="NULL"

Panel 1:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.rule_group
| rename waf_log.rule_logs{}.rule_group as "Rule Group"
| sort - count

Panel 2:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.rule_id
| rename waf_log.rule_logs{}.rule_id as "Rule ID"
| sort - count

Panel 3:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.status
| rename waf_log.status as "Log Status"
| sort - count

Panel 4:
| search client_ip="$cli_ip$" uri_query="$uri_que$" waf_log.rule_logs{}.rule_id="$rule_id$" waf_log.rule_logs{}.rule_name="$rule_name$" waf_log.status="$log_status$" waf_log.rule_logs{}.msg="$log_mess$"
| stats count by waf_log.rule_logs{}.msg
| rename waf_log.rule_logs{}.msg as "Log Message"
| sort - count

Any suggestions on these dashboards to make them more readable when users click on any of the values?
I see, sorry - I don't think it is possible to achieve what you are looking for without removing the fields you don't want to see from the source data. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
I don't understand why, but after removing everything from the Web UI and manually configuring the script in inputs.conf, it works - data flows into the index like a charm.
Hi @ITWhisperer,
We want the log events to be in a format which is useful for our app owners. For example, in my sample log:

avg_ingress_latency_fe: 0
cacheable: true
client_dest_port: 443
client_insights:

These fields at the beginning are not at all useful (but can't be removed), while waf_log, which is at the bottom, is more important, and we want it at the beginning.
@livehybrid @ITWhisperer Yes, I achieved this by creating a dashboard, but even after they click on any dashboard panel, they still see the same less important fields in the same event format, which is not what we want.