All Posts

Sorry, I was giving an example as I didn't have any users searching over 30 days. Please see below; I have added a table command too.

index=_audit action="search" info=completed
| eval et=COALESCE(search_et, api_et), lt=COALESCE(search_lt, api_lt)
| eval timespanSecs=lt-et
| where timespanSecs>(60*60*24*30)
| eval friendlyTime=tostring(timespanSecs, "duration")
| table user search friendlyTime

Is this what you are after? Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
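If you also want the output to show the actual time boundaries each search used, a small extension of the above might look like this (my addition, not part of the answer; the column names and strftime format are just examples, and "All time" searches may log search_et as 0 or N/A, which won't render as a meaningful timestamp):

index=_audit action="search" info=completed
| eval et=COALESCE(search_et, api_et), lt=COALESCE(search_lt, api_lt)
| eval timespanSecs=lt-et
| where timespanSecs>(60*60*24*30)
| eval earliestTime=strftime(et, "%Y-%m-%d %H:%M:%S"), latestTime=strftime(lt, "%Y-%m-%d %H:%M:%S")
| table user search earliestTime latestTime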
Sorry. The first time I sent it, it said it was reported as spam, so I wasn't sure it went through.
This is not what I am looking for - I am trying to get a list of Splunk users and searches that are searching back 30 DAYS or longer.
A segmentation fault (signal 11) in Splunk can have several potential causes, including memory corruption, insufficient resources, software bugs, or issues with the system configuration. Since you mentioned that you haven't changed anything recently, it's crucial to systematically investigate and rule out potential causes.

Steps to Troubleshoot and Diagnose the Issue:

1. Check Splunk Logs for More Clues
Look at the splunkd.log and crash.log files for additional context around the crash. These logs can be found in:

$SPLUNK_HOME/var/log/splunk/splunkd.log
$SPLUNK_HOME/var/log/splunk/crash*.log

Run:

grep -i 'fatal' $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i 'segfault' $SPLUNK_HOME/var/log/splunk/crash*.log

This might provide more context on what was happening before the crash.

2. Validate System Memory and Kernel Overcommit Settings
Your crash log shows "No memory mapped at address", which suggests possible memory issues. Check for kernel memory overcommitting, which can lead to random segmentation faults. Run:

cat /proc/sys/vm/overcommit_memory

If the value is 0, memory overcommit handling is heuristic-based. If 1, the system allows overcommitting memory, which is not recommended for Splunk. If 2, it's strict (recommended). If it's set to 1, you could consider changing it to see if that affects the Splunk service:

echo 2 | sudo tee /proc/sys/vm/overcommit_memory

And persist the change in /etc/sysctl.conf:

vm.overcommit_memory = 2

Also see https://splunk.my.site.com/customer/s/article/Indexer-crashed-after-OS-upgrade

3. Check Transparent Huge Pages (THP)
THP can cause issues with Splunk's memory management. Disable it if it's enabled. Check the current status:

cat /sys/kernel/mm/transparent_hugepage/enabled

If it says [always], disable it temporarily:

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

To make it permanent, add the following to /etc/rc.local:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

4. Check ulimits for the Splunk User
If the indexer is running into resource exhaustion, check its ulimits:

ulimit -a

Try updating to the following if not already set:

nofile = 65536
nproc = 16384

Adjust in /etc/security/limits.conf:

splunk soft nofile 65536
splunk hard nofile 65536
splunk soft nproc 16384
splunk hard nproc 16384

5. Check Splunk's Memory and CPU Usage
Run:

ps aux --sort=-%mem | grep splunk
free -m
top -o %CPU

Look for excessive memory or CPU consumption.

6. Check for Recent Software Updates or Kernel Changes
If the system has undergone automatic updates, it might have introduced compatibility issues. Check recent updates:

cat /var/log/dpkg.log | grep -i "upgrade"   # Debian/Ubuntu
cat /var/log/yum.log | grep -i "update"     # RHEL/CentOS

Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
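One caveat on step 4 (my addition, so verify against your setup): if splunkd was enabled for boot-start as a systemd service, /etc/security/limits.conf is generally not applied to it, and the limits have to be set on the unit instead. A sketch, assuming the default unit name Splunkd (yours may differ):

sudo systemctl edit Splunkd
# in the override file that opens, add:
#   [Service]
#   LimitNOFILE=65536
#   LimitNPROC=16384
sudo systemctl daemon-reload
sudo systemctl restart Splunkd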
Hi @boknows , please don't duplicate questions! See my answer to your other question: https://community.splunk.com/t5/Splunk-Search/Host-override-with-event-data/m-p/712235#M240309 Ciao. Giuseppe
Hi @boknows , the transforms.conf isn't correct: you aren't performing a field extraction, so please try:

[changehost]
DEST_KEY = MetaData:Host
REGEX = ^([^\s]+)
FORMAT = host::$1

Then, where did you put these settings? They must be located on the first full Splunk instance the events pass through, in other words on the first Heavy Forwarder or, if there is no HF, on the Indexers. Ciao. Giuseppe
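For completeness, the transform also has to be referenced from props.conf on the same instance; a minimal sketch, assuming your events come in with a sourcetype called my_sourcetype (swap in your real sourcetype or source stanza):

props.conf:
[my_sourcetype]
TRANSFORMS-changehost = changehost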
I am trying to get a list of Splunk users and searches that are searching back 30 DAYS or longer.
1. Are my instances independent of each other?
A: Yes - these are independent of each other.

2. Can I have anything set up so that the dashboard is visible in both instances?
A: I'm not aware of anything you can do out of the box to automate the sync between the two instances; the only thing you could do is write some custom code that uses the Splunk REST API to sync dashboards between them. I would recommend keeping your dashboards in an app in source control (e.g. GitLab/GitHub) and then deploying to both instances to give you the same dashboards on both.

3. Following on from 'You can also setup federated search between different instances so they can search the same data.', how can I do it?
A: This is a good starting point (https://www.splunk.com/en_us/blog/platform/introducing-splunk-federated-search.html) which covers the basics of Federated Search. This won't allow you to sync dashboards, but it should give you access to data on either Splunk Cloud stack.

Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
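To make the REST API route in answer 2 concrete, a rough sketch (all hostnames, credentials, app and dashboard names are placeholders; note that on Splunk Cloud the management port 8089 is not reachable by default, so API access must be enabled for your stack first):

# export a dashboard's XML from the first stack
curl -sk -u admin:changeme \
  "https://abc1.splunkcloud.com:8089/servicesNS/nobody/search/data/ui/views/my_dashboard?output_mode=json" \
  | jq -r '.entry[0].content["eai:data"]' > my_dashboard.xml

# create the same dashboard on the second stack
curl -sk -u admin:changeme \
  "https://abc2.splunkcloud.com:8089/servicesNS/nobody/search/data/ui/views" \
  -d name=my_dashboard \
  --data-urlencode "eai:data@my_dashboard.xml"

For anything beyond a one-off copy, the source-control/app-deployment route above is easier to keep consistent.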
Hi @scout29  Try the following query; this subtracts the earliest time from the latest time to get the number of seconds a search spans, then filters where the span is greater than 1800 seconds (30 mins):

index=_audit action="search" info=completed
| eval et=COALESCE(search_et, api_et), lt=COALESCE(search_lt, api_lt)
| eval timespanSecs=lt-et
| where timespanSecs>1800

Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi @splunklearner , you can create apps and give some Roles access to each one. But by assigning apps to Roles you solve only part of the problem, because you also have to assign data access to the Roles, which you can do in [Settings > Roles]. Finally, you have to associate your groups with Roles: Roles drive the access policies, and Users receive access grants through Roles. Ciao. Giuseppe
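If you manage configuration on disk rather than through the UI, the same role-to-data mapping lives in authorize.conf; a minimal sketch with placeholder role and index names:

[role_team_a_user]
importRoles = user
srchIndexesAllowed = team_a_index
srchIndexesDefault = team_a_index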
I am trying to create a search that shows me all users that are searching back 30 days or longer in Splunk. For example, if user123 ran a search of "index = windows" and selected a time picker value of 30 days, I would want the search results to show me the username, the search used, and the time picker used. I would want to see this for any users searching back 30 days or more in Splunk. This is what I have started with to use as a base search, but I am not finding any fields that show a time picker value:

index=_audit action="search"
Hello @livehybrid , yes, data is still coming into the original index, and it contains "TheAppResourceGroupName"
Hi @vksplunk1  Outputlookup is already categorised as a risky command in terms of protection against SPL in clicked links or in dashboards ("In the Search app, the warning dialog box appears when you click a link or type a URL that loads a search which contains risky commands. In dashboards, the warning dialog box appears automatically unless an input or visualization contains a search with a risky command"); however, it is not currently possible to display the alert if a user just types the command themselves into the search bar. Check out https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/SPLsafeguards for more information about this. Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
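As a side note, the risky classification itself is driven by the is_risky setting in commands.conf, so a custom search command shipped in one of your apps can be flagged the same way (a sketch; the command and script names are placeholders):

[mycustomcommand]
filename = mycustomcommand.py
is_risky = true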
Hi - Is there a way to warn the user when they try to execute the outputlookup command from the front end, to avoid accidentally deleting records from the KV store? Thank you
Hi @KKuser , first of all, don't attach a new question to such an old one (from nine years ago!), even if it's on the same topic, because it makes it difficult to get an answer; it's always better to create a new question. Anyway, if you need information about validated Splunk architectures for an on-premise or hybrid installation, see https://docs.splunk.com/Documentation/SVA/current/Architectures/About In Splunk Cloud you only see two machines: one Search Head for ES and one Search Head for the other apps. You don't know if there's a Search Head Cluster; probably not, both because you see only two machines and an SH Cluster needs at least three, and because you can upload apps, which isn't possible on SH Clusters. In addition, the Indexer layer is not visible to you: even if the diagram shows three Indexers, you cannot see them, and you cannot see the Cluster Manager. Surely there are many instances of Splunk Cloud on different AWS machines. For more information see https://docs.splunk.com/Documentation/SVA/current/Architectures/SCPExperience Ciao. Giuseppe
Hi @Nicolas2203  Are you still seeing data containing "TheAppResourceGroupName" in the original index?
Hello, My use case: Context: on Azure, data from several applications is pushed into an Azure Event Hub. I need to separate the data from one application and put it into a new index on Splunk. On Azure, all the resources of this app are in one Resource Group: TheAppResourceGroupName. I used a Heavy Forwarder, and these are my configs:

props.conf:
[source::eventhub://EVENTHUBAZURE.servicebus.windows.net/app-logs;]
TRANSFORMS-route = routeToNewIndex, discard_original

transforms.conf:
[routeToNewIndex]
REGEX = TheAppResourceGroupName
DEST_KEY = _MetaData:Index
FORMAT = NewIndex

[discard_original]
REGEX = TheAppResourceGroupName
DEST_KEY = queue
FORMAT = nullQueue

This config will delete the data, yes, but in the NewIndex and not in the original index, after the routing. I didn't find an answer which fits my needs in the community or the docs, but maybe someone has faced a similar need. Thanks a lot for the help! Nico
Hi @SN1  Did you use a user-seed.conf when setting up the new SH? And/or did you create an admin user that isn't called "admin"? It sounds like your admin user does not exist, but there are searches owned by the "admin" user. To resolve this, change "owner = admin" to a user which exists (ideally a service user) in the default.meta (or local.meta) files in all apps on your SH ($SPLUNK_HOME/etc/apps/<appName>/metadata/*.meta). Also check out https://community.splunk.com/t5/Reporting/Why-am-I-getting-error-quot-DispatchManager-The-user-admin-quot/m-p/196168 Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
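For illustration, such a metadata entry might look like the sketch below (stanza path and user name are placeholders; object names with spaces are URL-encoded in .meta files):

# $SPLUNK_HOME/etc/apps/<appName>/metadata/local.meta
[savedsearches/My%20Scheduled%20Search]
owner = svc_splunk
access = read : [ * ], write : [ admin ]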
I see an architecture online for Splunk Cloud. It has a Search Tier [Search Head (core), Search Head (Enterprise Security)], an Indexing Tier (the picture shows 3 indexers), and a Management Tier [Cluster Manager]. Is this a valid Splunk Cloud architecture? If there is a search head cluster, would it be shown in the architecture diagram? I'm also trying to figure out: if there are multiple instances of Splunk Cloud, can knowledge objects present in one instance be seen in the other instance as well?
I'm operating Splunk Cloud, and the addresses are something like abc1.splunkcloud.com and abc2.splunkcloud.com. I'm trying to get a dashboard in the Search & Reporting app to be visible in both instances.

1. Are my instances independent of each other?
2. Can I have anything set up so that the dashboard is visible in both instances?
3. Following on from 'You can also setup federated search between different instances so they can search the same data.', how can I do it?