All Topics

Hi Splunkers, I am getting below error while configuring the SSL certificate among the splunk hosts. ERROR DistributedTracer [2406 MainThread] - Couldn't find "distributed_tracer" in server.conf. ... See more...
Hi Splunkers, I am getting the below error while configuring SSL certificates among the Splunk hosts. ERROR DistributedTracer [2406 MainThread] - Couldn't find "distributed_tracer" in server.conf. Can you help me with this?
I have a json like this:   { "A": [ { "B": [ { "status": "2", "value": "1" }, { "status": "1", "value": "2" }, ... See more...
I have JSON like this:

{ "A": [ { "B": [ { "status": "2", "value": "1" }, { "status": "1", "value": "2" }, { "status": "3", "value": "4" }, { "status": "5", "value": "8" } ] } ] }

I want to extract the field value. I tried doing

spath input=field_name output=value path=A{0}.B{}.value

but it's not working. Please help.
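One hedged observation on the question above: spath array elements are addressed as `A{}` (all elements) or with 1-based indices like `A{1}`, so `A{0}` does not match anything. A minimal, self-contained sketch, assuming the JSON sits in a field called field_name as in the question:

```
| makeresults
| eval field_name="{\"A\":[{\"B\":[{\"status\":\"2\",\"value\":\"1\"},{\"status\":\"1\",\"value\":\"2\"},{\"status\":\"3\",\"value\":\"4\"},{\"status\":\"5\",\"value\":\"8\"}]}]}"
| spath input=field_name output=value path=A{}.B{}.value
```

This should return value as a multivalue field containing all four values.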
Hi, I want to display an average line in my bar chart. I am doing this, but instead of a line, a third bar is displayed. How do I add an average line instead of another bar?

| timechart span=2h max(debut) as "Début de session", max(fin) as "Fin de session" | eventstats avg("Début de session") as Average | eval Average=round(Average,0)

Thanks
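One common approach, sketched under the assumption that this is a Simple XML dashboard: keep the search as-is so Average comes back as its own series, then mark that series as a chart overlay so it renders as a line on top of the columns rather than as a third bar:

```
| timechart span=2h max(debut) as "Début de session", max(fin) as "Fin de session"
| eventstats avg("Début de session") as Average
| eval Average=round(Average,0)
```

and in the panel's XML:

```
<option name="charting.chart">column</option>
<option name="charting.chart.overlayFields">Average</option>
```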
Why does Splunk Cloud have the search head as standalone and not in a search head cluster? How does the Splunk engineering team manage maintenance and upgrade tasks in this scenario?
I am connected inside a WAN network that has WiFi APs and LAN cables, all on the same network. The network has a router and then just systems; there is no external firewall etc. that could give direct logs of data flow in the network. Where can I get logs to work with in Splunk so that I can continuously monitor in real time and visualize things like internet speed, data usage per host, etc.?
Hi Team, I am doing a set of PoCs to explore Splunk features. While doing so, I am able to send data to Splunk Observability using OpenTelemetry (OTel), and I just want to know: using OTel, can we send data to Splunk Enterprise? If yes, can you please guide me on how?
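For what it's worth, the OpenTelemetry Collector ships a Splunk HEC exporter, so one hedged sketch is to point the collector at an HTTP Event Collector endpoint on Splunk Enterprise. The token, endpoint, and index below are placeholders, not values from the question:

```yaml
exporters:
  splunk_hec:
    # placeholders - substitute your own HEC token, host, and index
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://your-splunk-enterprise:8088/services/collector"
    index: "main"
    sourcetype: "otel"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```

This assumes HEC is enabled on the Splunk Enterprise side and the token has access to the target index.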
Hi Splunk Gurus, I'm hoping there is a simple answer for this issue. We have recently upgraded to Splunk Enterprise 8.2. Our servers (RHEL 7/8) are all running Universal Forwarders 8.0. The issue we have found is that the UF does not include the Python 2.7/3.7 binaries and libs as part of its install package (yes, I know this has been the case for a long time). This is not an issue if you are installing the forwarder on a Splunk node, as the Enterprise version includes these and installs them (as far as I can tell) into the correct locations in the forwarder for it to use internally.

The problem appears when trying to upgrade the standalone Linux package (.tgz or .rpm) to 8.2.2.1, as the binary and packages for Python 3.7 are required (regardless of the python.version setting) to run the migration upgrade scripts. As RHEL 7/8 only has a supported package for Python 3.6, this becomes an even more pressing issue. I have installed Python 3.7 from source as a workaround and linked it to /opt/splunkforwarder/bin/python3.7 with some success. The main problem seems to be that the site-packages path is hard-coded into the forwarder to look for packages in /opt/splunkforwarder/lib/python3.7/site-packages regardless of the Python lib path locations. E.g., if I symlink /usr/local/bin/python3.7 -> /opt/splunkforwarder/bin/python3.7, I get these kinds of errors in splunkd.log:

/opt/splunkforwarder/bin/python3.7: can't open file '/opt/splunkforwarder/lib/python3.7/site-packages/splunk/clilib/cli.py': [Errno 2] No such file or directory

The splunk cmd that runs Python scripts from apps cannot even start correctly, regardless of the python.version value set in the app or server.conf.

So my actual question is: how do we get the Python 2.7 & 3.7 binaries and associated required packages into a forwarder? Is there a .tgz or .rpm that we can use to get the internal Python versions the forwarder requires installed in the right locations? Or a full forwarder .rpm that includes the binaries for exactly this standalone purpose? This would seem to be a significant oversight that assumes Splunk Enterprise will always be available to use as a base installer for all servers, and additionally that Python 3.7 is always available/easily installed. A much less desirable option would be to roll back the forwarders (and all deployed apps) to the latest 7.x version, but this limits moving forward and will create many more compatibility issues than it will solve. Any helpful hints, pointers or advice would be greatly appreciated. Regards, Kieren
Hi all, I currently have multiple Splunk servers configured in outputs.conf for the universal forwarders, but I am wondering if there is a way to index only to the second server if the first server becomes unreachable. Thanks,
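I am not aware of a documented strict active/passive priority setting in a universal forwarder's outputs.conf, but listing both servers in one tcpout group gets close in practice: the forwarder load-balances across the list and automatically stops sending to a server it cannot reach, resuming when it comes back. A sketch with hypothetical hostnames:

```
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = splunk-a.example.com:9997, splunk-b.example.com:9997
```

Under normal conditions data is spread across both servers rather than going only to the first, so whether this fits depends on why a primary-only behavior is wanted.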
Hello, I am streaming a list of data with the most recent timestamp, but the data is getting displayed at a different time. For example:

t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535
t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535
t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535

The epoch conversion of the above timestamp (t=1632967410.582567) is 7:03:30.582 PM, but the data on the dashboard is displayed at time 5:19:01.000 PM.

Background:
* Data is generated from a Python script; the data is a list of events, and each event is printed to stdout
* I have tried to include additional line breaks between each event, but it still streams as a single chunk and displays at a different timestamp
* The version of the SUF is 8.2.1 (build ddff1c41e5cf)
* The version of Splunk Enterprise is 8.1.2

Can someone guide me on fixing this so the streamed data displays with the correct timestamp? Thank you.
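For reference, epoch 1632967410 corresponds to 2021-09-30 02:03:30 UTC (7:03:30 PM in UTC-7), so a mismatch like the one described usually means Splunk is not extracting `t=` as the event timestamp and is falling back to something else, and the "single chunk" symptom suggests line breaking is also off. A hedged props.conf sketch for the indexer/heavy forwarder, where the sourcetype name `my_stream` is a made-up placeholder:

```
[my_stream]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=t=\d)
TIME_PREFIX = ^t=
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 20
```

The lookahead in LINE_BREAKER splits the stream before each `t=...` even when several events arrive in one chunk; TIME_PREFIX/TIME_FORMAT tell Splunk to parse the epoch value (with microseconds) as the event time.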
Hi all, I'm new to this forum and have found quite a few ideas and solutions to issues admins hit. The organisation I work for is standing up a new site and has requested a new pair of heavy forwarders to be installed. The issue we have been mulling over is how to provide a highly available forwarder cluster at this site. The forwarders will be based on Linux, will process data from the network (syslog, netflows, etc.) and will also process files located on an NFS share (a service-provider-managed CIFS/NFS share). We are using Splunk Cloud but have a deployment server on-prem to manage forwarders on the internal networks. My question: is there a solution to provide a clustered pair of forwarders that act as an active/passive cluster, supporting both processing files and accepting network traffic? cheers aiders
Is it possible to specify how far back data is retrieved for Azure AD logs? It seems to have grabbed a bit over 3 months' worth.
I configured Splunk's latest Box TA and I am receiving "Connection reset by peer" any suggestions on what the issue could be? Following is a snippet from splunkd.log   09-29-2021 18:56:48.754 -0400... See more...
I configured Splunk's latest Box TA and I am receiving "Connection reset by peer". Any suggestions on what the issue could be? The following is a snippet from splunkd.log:

09-29-2021 18:56:48.754 -0400 ERROR ExecProcessor - message from "/opt/splunk/splunk/bin/python3.7 /opt/splunk/splunk/etc/apps/Splunk_TA_box/bin/box_service.py" WARNING:boxsdk.network.default_network:Request "GET https://api.box.com/2.0/events?stream_type=admin_logs&limit=500&stream_position=0&created_after=2021-09-26T17:00:00-00:00&created_before=2021-09-27T17:00:00-00:00" failed with ConnectionError exception: ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))

My box.conf for the TA is as follows:

##
## SPDX-FileCopyrightText: 2021 Splunk, Inc. <sales@splunk.com>
## SPDX-License-Identifier: LicenseRef-Splunk-1-2020
##

[box_default]
folder_fields = type,id,name,size,sequence_id,etag,item_status,permissions,created_at,modified_at,has_collaborations,can_non_owners_invite,tags,created_by,modified_by,parent,path_collection,shared_link
collaboration_fields = type,id,created_by,created_at,modified_at,expires_at,status,accessible_by,role,acknowledged_at,item
file_fields = type,id,name,owned_by,comment_count,version_number,created_at,modified_at,purged_at,trashed_at,size,content_created_at,content_modified_at,file_version,description,path_collection,shared_link
task_fields = type,id,item,due_at,action,message,is_completed,created_by,created_at
comment_fields = type,id,is_reply_comment,message,tagged_message,item,modified_at,created_by,created_at
user_fields = type,id,name,login,created_at,modified_at,role,timezone,space_amount,space_used,max_upload_size,can_see_managed_users,is_external_collab_restricted,status,job_title,phone,address,avatar_url,is_exempt_from_device_limits,is_exempt_from_login_verification,enterprise,my_tags
created_after =
collection_interval = 120
priority = 10
record_count = 500
use_thread_pool = 1
url = https://api.box.com
restapi_base = https://api.box.com/2.0
disable_ssl_certificate_validation = True
Hi Team, I am trying to extract a few reports from the user agent, like below: OS details, OS version, Browser, Browser version, Operating System, Operating System version, Mobile device. Currently I am using eval (if & case) to generate the report; however, it's a very manual and time-consuming process. Please find the below commands as examples. If - val Device =if(match(cs_user_agent, "iPhone"),"iPhone",if(match(cs_user_agent, "Macintosh"),"iPhone",if(match(cs_user_agent, "iPad"),"iPhone",if(match(cs_user_agent, "Android"),"Android",if(match(cs_user_agent, "Win64"),"Windows",if(match(cs_user_agent, "14092"),"Windows",if(match(cs_user_agent, "Windows"),"Windows",if(match(cs_user_agent,"SM-"),"Android",if(match(cs_user_agent,"CPH"),"Android",if(match(cs_user_agent,"Nokia"),"Android",if(match(cs_user_agent,"Pixel"),"Android",if(match(cs_user_agent,"TB-"),"Android",if(match(cs_user_agent,"VFD"),"Android",if(match(cs_user_agent,"HP%20Pro%20Slate"),"Android",if(match(cs_user_agent,"VOG-L09"),"Android",if(match(cs_user_agent,"YAL-L21"),"Android",if(match(cs_user_agent,"ATU-L22"),"Android",if(match(cs_user_agent,"MAR-LX1A"),"Android",if(match(cs_user_agent,"RNE-L22"),"Android",if(match(cs_user_agent,"INE-LX2"),"Android",if(match(cs_user_agent,"AMN-LX2"),"Android",if(match(cs_user_agent,"LYO-LO2"),"Android",if(match(cs_user_agent,"DRA-LX9"),"Android",if(match(cs_user_agent,"LYA-L29"),"Android",if(match(cs_user_agent,"ANE-LX2J"),"Android",if(match(cs_user_agent,"STK-L22"),"Android",if(match(cs_user_agent,"EML-AL00"),"Android",if(match(cs_user_agent,"BLA-L29"),"Android",if(match(cs_user_agent,"X11"),"Linux",if(match(cs_user_agent,"LDN-LX2"),"Android",if(match(cs_user_agent,"TB3-"),"Android",if(match(cs_user_agent,"5033T"),"Android",if(match(cs_user_agent,"5028D"),"Android",if(match(cs_user_agent,"5002X"),"Android",if(match(cs_user_agent,"COR-"),"Android",if(match(cs_user_agent,"MI%20MAX"),"Android",if(match(cs_user_agent,"WAS-LX2"),"Android",if(match(cs_user_agent,"vivo"),"Android",if(matc
h(cs_user_agent,"EML-L29"),"Android",if(match(cs_user_agent,"Moto"),"Android",if(match(cs_user_agent,"MMB"),"Android",if(match(cs_user_agent,"Redmi%20Note%208"),"Android",if(match(cs_user_agent,"M2003J15SC"),"Android",if(match(cs_user_agent,"MI%20MAX"),"Android",if(match(cs_user_agent,"Nexus"),"Android",if(match(cs_user_agent,"ELE-L29"),"Android",if(match(cs_user_agent,"Redmi%20Note%204"),"Android",if(match(cs_user_agent,"rv:89.0"),"Android",if(match(cs_user_agent,"VKY-L09"),"Android",if(match(cs_user_agent,"SmartN11"),"Android",if(match(cs_user_agent,"A330"),"Android",if(match(cs_user_agent,"LM-"),"Android",if(match(cs_user_agent,"G8341"),"Android",if(match(cs_user_agent,"INE-AL00"),"Android",if(match(cs_user_agent,"Mi"),"Android",if(match(cs_user_agent,"CLT"),"Android",if(match(cs_user_agent,"Android"),"Android",if(match(cs_user_agent,"BV9700Pro"),"Android",if(match(cs_user_agent,"5024I"),"Android",if(match(cs_user_agent,"MEIZU"),"Android",if(match(cs_user_agent,"Linux%20X86_64"),"Linux","OTHER"))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) Case - val Brand= case(match(cs_user_agent, "CPH"),"Oppo",match(cs_user_agent, "SM-"),"Samsung",match(cs_user_agent, "VFD"),"Vodafone",match(cs_user_agent, "VFD"),"Vodafone",match(cs_user_agent, "VOG"),"Huawei",match(cs_user_agent, "ELE"),"Huawei",match(cs_user_agent, "CLT"),"Huawei",match(cs_user_agent, "EML"),"Huawei",match(cs_user_agent, "LYA"),"Huawei",match(cs_user_agent, "EVR"),"Huawei",match(cs_user_agent, "BLA"),"Huawei",match(cs_user_agent, "DRA"),"Huawei",match(cs_user_agent, "LDN"),"Huawei",match(cs_user_agent, "YAL-L21"),"Huawei",match(cs_user_agent, "ATU-L22"),"Huawei",match(cs_user_agent, "MAR-LX1A"),"Huawei",match(cs_user_agent, "X11"),"Linux",match(cs_user_agent, "INE-LX2"),"Huawei",match(cs_user_agent, "AMN-"),"Huawei",match(cs_user_agent, "RNE-L22"),"Honor",match(cs_user_agent, "LYO"),"Huawei",match(cs_user_agent, "ANE"),"Huawei",match(cs_user_agent, "STK"),"Huawei",match(cs_user_agent, 
"BLA"),"Huawei",match(cs_user_agent, "TB3-"),"Lenovo",match(cs_user_agent, "5033T"),"Alcatel",match(cs_user_agent, "5028D"),"Alcatel",match(cs_user_agent, "5002X"),"Alcatel",match(cs_user_agent, "iPhone"),"iPhone",match(cs_user_agent, "20Win64"),"Desktop",1=1,"other") Can any one help me on how do i use  lookup? or automatic lookup so it fills a "human-readable" type into a separate field.   Thanks 
Hi, I tried to search for this online and am probably not typing my search correctly :-), and am hoping one of you Splunk experts could help me or point me in the right direction. I have a full Splunk heavy forwarder that is monitoring some network folders (and sending them to some indexes using the outputs.conf configuration). This server is going to be retired, and I am moving this to a new Splunk server installation. My question is: where does Splunk store the information about the last file it has read? I am looking to recreate the directory monitor input on the new server; however, I assume that the new server would start reading the folders from scratch. I am hoping to stop the old server's input and start the new server, but have the new server read from where the old server left off. Any help would be appreciated. Thanks so much! Oh, also, this is on Microsoft Windows.
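For reference, the file-monitoring checkpoints ("how far each file has been read") live in the fishbucket database rather than in the inputs configuration. A commonly cited, though not officially supported, migration approach is to stop both instances and copy that directory to the new server along with the inputs configuration; the path below assumes a default install:

```
%SPLUNK_HOME%\var\lib\splunk\fishbucket\
```

(on Linux, the equivalent default is $SPLUNK_HOME/var/lib/splunk/fishbucket). Whether copied checkpoints are honored can depend on the versions involved, so treat this as a sketch to test rather than a guaranteed procedure.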
Our problem and question: the machine agent is installed, and IIS monitoring is consuming licenses as well. It is supposedly one license for all the monitoring; we have a premium license, yet it reports license consumption for both, and the licenses are almost used up.
Hi Splunkers, We have installed the Nimbus add-on for sending Splunk alerts to the CA UIM portal. Before the upgrade to 8.1 it was working fine, but after the Splunk version upgrade, Splunk is not able to send alerts to CA UIM. We are seeing the below errors in the logs. Per the error, I suspect this is related to the Python version; we have Python 3.7, which is needed for Splunk 8.1. Kindly assist in understanding what is actually going on.

Error:
09-30-2021 00:22:14.930 +0200 ERROR sendmodalert [56934 AlertNotifierWorker-0] - action=nibus_alerting STDERR - File "/opt/splunk/etc/apps/nimbus-alerting/bin/nimbus_alerting/alert_actions_base.py", line 188, in prepare_meta_for_cam
09-30-2021 00:22:14.930 +0200 ERROR sendmodalert [56934 AlertNotifierWorker-0] - action=nibus_alerting STDERR - File "/opt/splunk/etc/apps/nimbus-alerting/bin/nibus_alerting.py", line 56, in <module>
Hi All, I am looking to create an alert based on the following base search: index=wineventlog w19tax.exe app_name=W19TAX. I specifically want the alert to trigger only when the same SID comes up multiple times for the same application. Example event:

09/29/2021 04:21:08 PM
LogName=Microsoft-Windows-AppLocker/EXE and DLL
SourceName=Microsoft-Windows-AppLocker
EventCode=8002
EventType=4
Type=Information
ComputerName=BPOLCP01S12.rightnetworks.com
User=NOT_TRANSLATED
Sid=S-1-5-21-2605281412-2030159296-1019850961-762275
SidType=0
TaskCategory=None
OpCode=Info
RecordNumber=39961045
Keywords=None
Message=D:\PROGRAM FILES\LACERTE\19TAX\W19TAX.EXE was allowed to run.
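One hedged way to express "same SID seen multiple times for the same application" on top of that base search (the threshold and any time window are placeholders to tune):

```
index=wineventlog w19tax.exe app_name=W19TAX
| stats count AS occurrences values(ComputerName) AS hosts BY Sid app_name
| where occurrences > 1
```

Saved as an alert over a chosen window (say the last 15 minutes, "trigger if number of results > 0"), this only returns rows, and therefore only fires, when a SID repeats for the application.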
So I have a search that triggers based on how much memory is being used on any of my Linux machines:

index=nix sourcetype=freemem host=`<mySystemHosts>` | eval pctUsed=(totMemory-cacheMemory)/totMemory * 100 | where pctUsed > 85 | table _time host pctUsed

That alert triggers fine, but I would like to add some details from my 'ps' data set about each individual job that is running on a given host. So let's assume that the above alert triggers and generates a table of 3 hosts; I would like to add 'ps' contextual details to each of those devices within the alert itself.
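A sketch of one way to pull in the 'ps' context, assuming the ps data lives in the same index and carries a field like COMMAND (that field name is a guess — substitute whatever your ps sourcetype actually extracts):

```
index=nix sourcetype=freemem host=`<mySystemHosts>`
| eval pctUsed=(totMemory-cacheMemory)/totMemory * 100
| where pctUsed > 85
| table _time host pctUsed
| join type=left host
    [ search index=nix sourcetype=ps host=`<mySystemHosts>`
      | stats values(COMMAND) AS commands BY host ]
```

The left join keeps the alert rows intact even when a host has no recent ps events; for large result sets, an append plus stats by host is a more scalable alternative to join.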
"Attackers might be trying to steal your information from www.staging.splunk.com (for example, passwords, messages, or credit cards). NET::ERR_CERT_DATE_INVALID" - I am trying on Google Chrome on my personal as well as my company laptop. When I click to access my 14-day free trial for Splunk Cloud, the page throws this error. Thank you in advance for your time.
Need direction/information on any apps or TAs to pull logs from Cloud.gov. I'm new to cloud.gov, so I appreciate any help. We have Splunk Enterprise + ES in our environment.