All Topics

Hello Splunkers, I'm trying to send traces to APM Observability from an existing website built on Python (3.9.7), Django (4.1.3), and MySQL (8.0.32), hosted on Linux. I'm having problems configuring it via Python instrumentation. Here are the steps I followed in a virtual environment, based on the Splunk docs:
- installed the OpenTelemetry Collector via the curl script
- installed the instrumentation packages for the Python environment and ran splunk-py-trace-bootstrap
- set the environment variables (OTEL_SERVICE_NAME, OTEL_RESOURCE_ATTRIBUTES, OTEL_EXPORTER_OTLP_ENDPOINT, DJANGO_SETTINGS_MODULE)
When I enable the Splunk OTel Python agent, it gives me the error below:
Instrumenting of sqlite3 failed ModuleNotFoundError: No module named '_sqlite3'
Failed to auto initialize opentelemetry ModuleNotFoundError: No module named '_sqlite3'
Performing system checks...
I've already tried reinstalling sqlite3, and even downloaded the sqlite3 module contents from the Python repository and replaced the files manually, but still cannot proceed. Any help or direction would be very much appreciated. Thanks!
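For what it's worth, '_sqlite3' is the C extension that ships with the interpreter itself, so this error usually means the Python build was compiled without the SQLite development headers rather than anything being wrong with the OpenTelemetry packages. If the site doesn't actually use sqlite3, a hedged workaround is to skip that one instrumentor; OTEL_PYTHON_DISABLED_INSTRUMENTATIONS is a standard opentelemetry-python setting, and the service name and endpoint values below are placeholders:

# sketch only: disable the sqlite3 instrumentor so auto-instrumentation can continue
export OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="sqlite3"
export OTEL_SERVICE_NAME="my-django-site"                     # placeholder
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"    # local collector default
splunk-py-trace python3 manage.py runserver

The other option is to rebuild or reinstall the interpreter with the SQLite development package present, which removes the error for every tool, not just the agent.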
Hi guys, we are collecting Kubernetes logs via HEC into our Splunk Cloud stack. Whenever there is an ERROR entry in the logs, the first line carries a timestamp and the following lines (without timestamps) carry the details of that error. But in the Splunk console those follow-up lines are split into separate events, which is confusing. Is there any way to merge these lines into a single event, so that everything related to one error is visible together? Please help on this.
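If the data reaches Splunk through the raw HEC endpoint, the usual fix is to break events only where a new timestamp starts, via props.conf for that sourcetype on the parsing tier (on Splunk Cloud this is typically delivered through an uploaded app or support). A minimal sketch, assuming a placeholder sourcetype name and that each real event starts with an ISO-style timestamp:

[kubernetes:app]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 40

If the sender uses the /services/collector/event endpoint instead, each posted payload is already one event and props-based line breaking will not apply; in that case the collector (for example Fluentd or Fluent Bit multiline settings) has to assemble the stack trace into one message before it is sent.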
Hi Team, I have this table in my dashboard:

Age | Approval | Name
61 | Approve | Sujata
29 | Approve | Linus
33 | Approve | Karina
56 | Approve | Rama

The requirement is to change Approve to Approved once the user clicks on a particular row, so the output should look like this:

Age | Approval | Name
61 | Approved | Sujata
29 | Approve | Linus
33 | Approve | Karina
56 | Approve | Rama
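A hedged Simple XML sketch of one way to do this with a drilldown token: the click stores the row's Name, and the table's search re-evaluates the Approval column. The inputlookup source and the token default are placeholders, and note this only changes what is displayed; it does not write anything back to the underlying data.

<form>
  <init>
    <set token="clicked_name">__none__</set>
  </init>
  <row>
    <panel>
      <table>
        <search>
          <query>| inputlookup approvals.csv
| eval Approval=if(Name="$clicked_name$", "Approved", Approval)
| table Age Approval Name</query>
        </search>
        <drilldown>
          <set token="clicked_name">$row.Name$</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

If the change needs to persist, or more than one row can be Approved at a time, that is usually done by writing to a KV store lookup from the drilldown instead, which takes a bit of JavaScript or an outputlookup-based pattern.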
Hi everyone, I created a lookup table:

Department,Vendor,Type,url_domain,user,src_ip,Whitelisted
BigData,Material,Google Remote Desktop,Alpha.com,Alice,172.16.28.12,TRUE

Then I created a lookup definition with this match type:

WILDCARD(url_domain), WILDCARD(user), WILDCARD(src_ip)

Then I tested it with the following search, but it didn't work:

index=fortigate src_ip=172.16.28.12 url_domain=Alpha.com
| lookup Whitelist url_domain user src_ip
| where isnull(Whitelisted)
| table _time, severity, user, url_domain, src_ip, dest_ip, dest_domain, transport, dest_port, vendor_action, app, vendor_eventtype, subtype, devname

It still shows all results, including traffic from 172.16.28.12 by Alice to the mentioned URL. Does anyone have an idea what the issue is?
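A few hedged things to check, since WILDCARD match types are easy to trip over, followed by a sketch of the transforms.conf side (stanza and file names are assumptions based on the post):
- WILDCARD matching only helps when the lookup value itself contains a *, for example *Alpha.com* to match full URLs; a literal value like Alpha.com still has to equal the event field exactly.
- Lookup matching is case sensitive by default, so alpha.com in the events will not match Alpha.com in the CSV.
- Being explicit about the output field makes failures easier to spot.

[Whitelist]
filename = whitelist.csv
match_type = WILDCARD(url_domain), WILDCARD(user), WILDCARD(src_ip)
case_sensitive_match = false
max_matches = 1

and the corresponding search line:

| lookup Whitelist url_domain user src_ip OUTPUT Whitelisted

To isolate which field is failing, temporarily run the lookup with only one input field (for example just src_ip) and add Whitelisted to the table.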
Hello everyone. There is one indexer cluster with one search head, one manager node, and three peers, configured with RF=3 and SF=3. There is also a non-clustered indexer, and many universal forwarders send data to it. I want this non-clustered indexer to join the cluster so that the cluster takes over its incoming data, and so that if this indexer fails, other peers in the cluster hold copies of its data. What should I do?
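A hedged outline of the usual approach (host names and the replication port below are placeholders, and the exact flag names vary slightly between Splunk versions):

splunk edit cluster-config -mode peer -manager_uri https://cluster-manager.example.com:8089 -replication_port 9887 -secret <pass4SymmKey>
splunk restart

Two caveats worth planning for: buckets that already exist on the standalone indexer stay searchable on that peer but are not retroactively replicated (only data indexed after it joins gets RF=3/SF=3 protection), and the universal forwarders should be repointed at the whole peer set, either by listing all peers in outputs.conf or by using indexer discovery through the manager node, so their traffic no longer depends on that single box.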
Hello everyone. In the Investigation view, in the Workbench section, I want to add an artifact type other than the ones that appear (asset, identity, file, url); I would like an artifact type Process and another type Index. Where can I add custom artifact types to use in the Workbench?
I have a question about how to properly extract the time range between events to use as the value of a Date-Range column. I'm setting up the Chargeback app and building a specific report. Currently I'm tracking total ingestion by Biz_Unit. The main Splunk query works fine, but there is a lot of time manipulation within the search and I'm not sure how to properly produce the date I need. Here is an example of some of the output (screenshot not included).

This is the query; I know it's a large query, but it outputs all of the fields used in Chargeback:

`chargeback_summary_index` source=chargeback_internal_ingestion_tracker idx IN (*) st IN (*) idx="*" earliest=-7d@d latest=now
| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st `chargeback_comment(" | `chargeback_data_2_bunit(index,index_name,index_name)` ")`
| `chargeback_index_enrichment_priority_order`
| `chargeback_get_entitlement(ingest)`
| fillnull value=100 perc_ownership
| eval shared_idx = if(perc_ownership="100", "No", "Yes")
| eval ingestion_idx_st_GB = ingestion_idx_st_GB * perc_ownership / 100 , ingest_unit_cost = ingest_yearly_cost / ingest_entitlement / 365
| fillnull value="Undefined" biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email
| fillnull value=0 ingest_unit_cost, ingest_yearly_cost, ingest_entitlement
| stats Latest(License) As License Latest(ingest_unit_cost) As ingest_unit_cost Latest(ingest_yearly_cost) As ingest_yearly_cost Latest(ingest_entitlement) As ingest_entitlement_GB Latest(shared_idx) As shared_idx Latest(ingestion_idx_st_GB) As ingestion_idx_st_GB Latest(perc_ownership) As perc_ownership Latest(biz_desc) As biz_desc Latest(biz_owner) As biz_owner Latest(biz_email) As biz_email Values(biz_division) As biz_division by _time, biz_unit, biz_dep, index_name, st
| eventstats Sum(ingestion_idx_st_GB) As ingestion_idx_GB by _time, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_dep_GB by _time, biz_unit, biz_dep, index_name
| eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_GB by _time, biz_unit, index_name
| eval ingestion_idx_st_TB = ingestion_idx_st_GB / 1024 , ingestion_idx_st_PB = ingestion_idx_st_TB / 1024 , ingestion_idx_TB = ingestion_idx_GB / 1024 , ingestion_idx_PB = ingestion_idx_TB / 1024 , ingestion_bunit_dep_TB = ingestion_bunit_dep_GB / 1024 , ingestion_bunit_dep_PB = ingestion_bunit_dep_TB / 1024, ingestion_bunit_TB = ingestion_idx_GB / 1024 , ingestion_bunit_PB = ingestion_bunit_TB / 1024
| eval ingestion_bunit_dep_cost = ingestion_bunit_dep_GB * ingest_unit_cost, ingestion_bunit_cost = ingestion_bunit_GB * ingest_unit_cost, ingestion_idx_st_cost = ingestion_idx_st_GB * ingest_unit_cost
| eval ingest_entitlement_TB = ingest_entitlement_GB / 1024, ingest_entitlement_PB = ingest_entitlement_TB / 1024
| eval Time_Period = strftime(_time, "%a %b %d %Y")
| search biz_unit IN ("*") biz_dep IN ("*") shared_idx=* _time IN (*) biz_owner IN ("*") biz_desc IN ("*") biz_unit IN ("*")
| table biz_unit biz_dep Time_Period index_name st perc_ownership ingestion_idx_GB ingestion_idx_st_GB ingestion_bunit_dep_GB ingestion_bunit_GB ingestion_bunit_dep_cost ingestion_bunit_cost biz_desc biz_owner biz_email
| sort 0 - ingestion_idx_GB
| rename st As Sourcetype ingestion_bunit_dep_cost as "Cost B-Unit/Dep", ingestion_bunit_cost As "Cost B-Unit", biz_unit As B-Unit, biz_dep As Department, index_name As Index, perc_ownership As "% Ownership", ingestion_idx_st_GB AS "Ingestion Sourcetype GB", ingestion_idx_GB As "Ingestion Index GB", ingestion_bunit_dep_GB As "Ingestion B-Unit/Dep GB",ingestion_bunit_GB As "Ingestion B-Unit GB", biz_desc As "Business Description", biz_owner As "Business Owner", biz_email As "Business Email"
| fieldformat Cost B-Unit/Dep = printf("%'.2f USD",'Cost B-Unit/Dep')
| fieldformat Cost B-Unit = printf("%'.2f USD",'Cost B-Unit')
| search Index = testing
| dedup Time_Period
| table B-Unit Time_Period "Ingestion B-Unit GB"

That output is what I'm trying to extract the Date_Range for. The query bins _time twice:

| fields _time idx st ingestion_gb indexer_count License
| rename idx As index_name
| `chargeback_normalize_storage_info`
| bin _time span=1h
| stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st
| bin _time span=1d
| stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st

I've asked our GPT-equivalent bot how to do this properly, and it said that when I group the stats by _time and index, the time value gets overwritten. It also kept recommending that I add a stats/eval near the bottom of the query, something like:

| stats sum(Ingestion_Index_GB) as Ingestion_Index_GB sum("Ingestion B-Unit GB") as "Ingestion B-Unit GB" sum("Cost B-Unit") as "Cost B-Unit" earliest(_time) as early_time latest(_time) as late_time by B-Unit
| eval Date_Range = strftime(early_time, "%Y-%m-%d %H:%M:%S") . " - " . strftime(late_time, "%Y-%m-%d %H:%M:%S")
| table Date_Range B-Unit Ingestion_Index_GB "Ingestion B-Unit GB" "Cost B-Unit"

In other instances it said the value wasn't in string format, so I couldn't use strftime. Overall, I'm now confused about what is happening to the _time value. All I want is to get the earliest and latest value by index and set that as Date_Range. Can someone help me with this, and possibly explain what happens to the _time variable as it keeps getting manipulated and grouped by? This is the search query found in the Chargeback app under the Storage tab; it's the "Daily Ingestion By Index, B-Unit & Department" search. If anyone has any ideas, any help would be much appreciated.
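For what it's worth, nothing in this query destroys _time until the very end: bin only rounds each event's _time down to the hour or day, the stats commands keep _time because it is in their by clause, and Time_Period is just a formatted copy of it. _time stops being available once the final | table lists columns that do not include it, which is why strftime "stops working" lower down. A hedged sketch of the change: compute the range while _time is still present (anywhere after the daily stats and before the final table), using field names that match the rest of the query:

| eventstats min(_time) As early_time max(_time) As late_time by index_name
| eval Date_Range = strftime(early_time, "%m/%d/%Y") . " - " . strftime(late_time, "%m/%d/%Y")

Then add Date_Range to the final | table (or swap it in for Time_Period). If the range should be per B-Unit instead of per index, change the by clause accordingly.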
Hello,

| dbxquery connection=test query="select employee_data from company"

The employee_data values below are not proper JSON, so I can't use spath. How do I replace each single quote (') with a double quote ("), replace None with "None", and put the result in a new field? Thank you for your help.

employee_data
[{company':'company A','name': 'employee A1','position': None}, {company': 'company A','name': 'employee A2','position': None}]
[{company':'company B','name': 'employee B1','position': None}, {company': 'company B','name': 'employee B2','position': None}]
[{company':'company C','name': 'employee C1','position': None}, {company': 'company C','name': 'employee C2','position': None}]
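A hedged SPL sketch that chains replace() calls into a new field and then parses it; it assumes the only problems are the bare None values, the single quotes, and the missing quote in front of the company key (as in the samples above):

| dbxquery connection=test query="select employee_data from company"
| eval employee_json = replace(employee_data, "None", "'None'")
| eval employee_json = replace(employee_json, "{company'", "{'company'")
| eval employee_json = replace(employee_json, "'", "\"")
| spath input=employee_json

Order matters here: quote the None values and repair the company key while everything is still single-quoted, then convert all single quotes to double quotes in one pass. spath over the array produces multivalue fields named like {}.company and {}.name; if one row per employee is needed, follow with spath input=employee_json path={} output=employee and then mvexpand employee.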
Hello, I have a question about how to pull custom method data collector values and add them to custom metrics that can be used in dashboard widgets in AppDynamics. I have configured the data collectors to pull the values from a given endpoint and have validated that the values show up in snapshots; however, when I navigate to the Analytics tab and search for the custom method data, it is not present. I have double-checked that transaction analytics is enabled for the business transaction in question, and the data collector is listed in the Transaction Analytics - Manual Data Collectors section of Analytics. The only issue is getting these custom method data collectors to populate in the Custom Method Data section of the Analytics search tab so that I can create custom metrics on this data. Any help is much appreciated!
I've been receiving this error from my UF, which is extremely frustrating since Splunk doesn't offer any support unless you're paying them. What I've tried so far, on both Ubuntu and an AWS Splunk forwarder:
- ran systemctl daemon-reload
- enabled/disabled boot-start
- reviewed splunkd.log (sometimes it says splunk.pid doesn't exist)

The error when starting:
"failed to start splunk.service: unit splunk.service not found"

The service status output:
SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: error (Reason: Unit SplunkForwarder.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy)
Active: failed (Result: signal) since Wed 2024-01-17 20:04:18 UTC; 13s ago
Duration: 1min 48.199s
Main PID: 14888 (code=killed, signal=KILL)
CPU: 2.337s

What is going on here?
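A hedged read of the symptoms: the start command refers to splunk.service while boot-start generated a unit named SplunkForwarder.service, and "failed to load properly" usually means a stale or half-edited unit file is still registered. One cleanup sequence worth trying (it assumes a default /opt/splunkforwarder install and root access; the -user value is whatever account owns the files):

# stop and remove whatever boot-start setup exists now
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk disable boot-start
rm -f /etc/systemd/system/splunk.service /etc/systemd/system/SplunkForwarder.service
systemctl daemon-reload

# recreate a single, systemd-managed unit and start it by its real name
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk -systemd-managed 1
systemctl daemon-reload
systemctl start SplunkForwarder

After that, always start and stop it as SplunkForwarder (systemctl status SplunkForwarder) rather than splunk.service. If it still dies with signal KILL, check dmesg/journalctl for the OOM killer, since that is a separate problem from the unit-name mismatch.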
Is there a way to export the Content Management list to Excel? I want to go over it with my team, and it would be faster to have the full list of objects when deciding what we want to enable.
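Assuming this refers to the Enterprise Security Content Management page: there is no built-in Excel export, but most of what it lists are saved searches, so a hedged workaround is to pull the same objects with a REST search and use the normal Export > CSV option (CSV opens in Excel). A sketch that lists correlation searches and whether they are enabled:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title eai:acl.app description disabled
| sort title

Adjust or drop the filter if the team also wants non-correlation content such as lookups or macros, which live under different REST endpoints.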
Hi, I am having issues passing a value into savedsearch. Below is a simplified version of my query:

| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| appendcols [| savedsearch "Events_list" perc=Critical]

Basically, I want to use the Critical value as the value of perc in the subsearch, but it does not work correctly: I get no results. When I replace Critical with 10 in the subsearch, it works just fine.
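The subsearch runs before (and independently of) the outer pipeline, so perc=Critical passes the literal word Critical rather than the field's value, and the saved search returns nothing. A hedged alternative is map, which substitutes field values from each incoming row into a search template (this assumes Events_list actually accepts perc as an argument):

| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| map maxsearches=1 search="| savedsearch Events_list perc=$Critical$"

Note that map replaces the outer results rather than appending columns the way appendcols does; if the Critical value needs to appear alongside the saved search's output, add it back inside the template with another $Critical$ reference (for example an eval) or join the two result sets afterwards.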
We ingest logs from an AWS cloud environment into Splunk via HTTP Event Collector. One of the users is reporting that some of the logs are missing in Splunk. Is there any log file we can use to validate this, and how can we check whether there were connectivity drops between the HTTP sender and Splunk Cloud?
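There is no per-event delivery log, but both ends record failures. On the Splunk side, HEC problems (bad payloads, invalid or disabled tokens, blocked queues) land in splunkd.log, so a hedged starting point is:

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

On the sending side, HEC answers every POST with an HTTP status code, so whatever ships the data from AWS (Firehose, Lambda, a forwarder, etc.) should be logging non-200 responses and retries; that is where connectivity drops are visible, since Splunk cannot log requests that never arrived. Finally, a simple timechart count over the suspect index/sourcetype and time window, compared with the source system's own counts, helps narrow down whether events were dropped in transit or just carry unexpected timestamps and sit outside the searched time range.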
Events are merging like this:

2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:26:50.488943-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=5fdc6ec-01f0-41d5-8a33-d58b5efre2022-02-02T15:26:50.496126-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=6fe48c-20ee-4f7b-bf88-22ed5dfdd2022-02-02T15:26:50.502563-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bcd5c461-9d23-4c79-8509-4af76c03ff5a2022-02-02T15:26:50.505764-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bbb9449e-2893-4d06-bc51-edfdd42022-02-02T15:26:50.512171-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=155c7a37-69bc-44d2-98ac-cb75831a7c472022-02-02T15:26:50.517049-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=a575dfde3eb-4ca6-be2d-4491a4b59fe02022-02-02T15:33:33.669982-05:00 mycompany: syslog initialised2022-02-02T15:33:40.935228-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:33:41.990171-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:35:34.533063-05:00 mycompany: syslog initialised2022-02-02T15:35:42.168799-05:00 mycompany: [Portal|SYSTEM|20001

I expect the logs to break on the timestamps, like this:

2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised
2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting
2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running
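If the breaking happens on the Splunk side, a hedged props.conf sketch for the sourcetype on the first full Splunk instance that parses the data (indexer or heavy forwarder); the stanza name is a placeholder and it assumes each real event starts with an ISO-8601 timestamp:

[mycompany:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 40

One caveat: LINE_BREAKER only splits where there is something (here, a newline) for its capture group to consume. If the sample above reflects the raw stream as received, that is, the newlines are genuinely missing before each new timestamp, the problem is upstream, typically syslog messages being concatenated over TCP without proper framing, and it is better fixed at the syslog sender or receiver than with event breaking.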
Hello to all, really hoping I can make sense while asking this... I'm an entry-level IT Security Specialist and I have been tasked with rewriting our current query for overnight logins, as the existing query does not put out the information we need. Here is the current query:

source=WinEventLog:Security EventCode=4624 OR (EventCode=4776 Keywords="Audit Success")
| eval Account = mvindex(Account_Name, 1)
| eval TimeHour = Strftime(_time, "%H")
| eval Source = coalesce(Source_Network_Address, Sorce_Workstation)
| eval Source=if(Source="127.0.0.1" or Source="::1" OR Source="-" OR Source="", hos, Source)
| where (Time_Hour > 20 AND Time_Hour <24) OR (Time_Hour > 0 AND Time_Hour < 5)
| bin _time span=12h aligntime=@d+20h
| eval NightOf = strftime(_time "%m/%d/%Y)
| lookup dnslookup clienttip as Source OUTPUT clienthost as SourceDevice
| search NOT Account="*$" NOT Account=HealthMail*" NOT Account="System"
| stats count as LoginEvents values(sourceDevice) as SourceDevices by Account NightOf
| sort NightOfAccount SourceDevices
| table NightOf Account Source Devices LoginEvents

I need to add an exclusion for logon type 3 (so Splunk omits those from the search), and also add our asset to the query so that Splunk only looks at logins on that particular machine. I know nothing about coding or scripts, and my boss thought it would be super fun if the guy with the least experience figured it all out, since the current query does not give us the data we need for our audits. In a nutshell, we need Splunk to tell us who was logged in between 8pm and 5am, that it was a logon type 2, and what computer system they were on. If anyone could help out an absolute noob here, I would greatly appreciate it!
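A hedged rewrite that keeps the shape of the existing search but filters to interactive logons (Logon_Type=2, which also excludes type 3) on a single machine, and cleans up what look like typos in the pasted version (Sorce_Workstation, hos, Time_Hour vs TimeHour, the unclosed strftime quote). The host value is a placeholder, EventCode 4776 is dropped because it carries no Logon_Type, and the exact Windows field names (Source_Workstation vs Workstation_Name) depend on how your Windows TA extracts them:

source=WinEventLog:Security EventCode=4624 Logon_Type=2 host="YOUR-ASSET-NAME"
| eval Account = mvindex(Account_Name, 1)
| eval TimeHour = tonumber(strftime(_time, "%H"))
| where TimeHour >= 20 OR TimeHour < 5
| eval Source = coalesce(Source_Network_Address, Source_Workstation)
| eval Source = if(Source IN ("127.0.0.1", "::1", "-", ""), host, Source)
| bin _time span=12h aligntime=@d+20h
| eval NightOf = strftime(_time, "%m/%d/%Y")
| lookup dnslookup clientip AS Source OUTPUT clienthost AS SourceDevice
| search NOT Account="*$" NOT Account="HealthMail*" NOT Account="SYSTEM"
| stats count AS LoginEvents values(SourceDevice) AS SourceDevices by Account NightOf
| sort NightOf Account
| table NightOf Account SourceDevices LoginEvents

The >= 20 OR < 5 condition covers 8:00pm through 4:59am; the original's > 20 and > 0 comparisons missed the 8pm and midnight hours.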
Hi all, I have a particular issue: reading data from the KV store works fine, but saving anything using helper.save_check_point fails. I added logs and found that the failure happens only on the batch_save POST API that Splunk uses internally; the error I get is:

File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse response.begin()
File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin version, status, reason = self._read_status()
File "/opt/splunk/lib/python3.7/http/client.py", line 288, in _read_status raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
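RemoteDisconnected on /storage/collections/data/<collection>/batch_save generally means splunkd (or the KV store behind it) dropped the connection mid-request, so it is worth checking KV store health and the batch size before suspecting the add-on code. A few hedged checks, assuming shell access to the search head running the input and a default /opt/splunk path:

/opt/splunk/bin/splunk show kvstore-status
# mongod-side errors around the time of the failed save:
tail -n 200 /opt/splunk/var/log/splunk/mongod.log
# current batch_save limits (documents and MB per call):
/opt/splunk/bin/splunk btool limits list kvstore --debug

If the checkpoint payload is large, splitting it into smaller saves (or raising the [kvstore] batch limits in limits.conf) is the usual remedy; if kvstore-status does not report ready, that is the thing to fix first.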
Please help us fix the installation issue below. It seems Splunk is trying to find some file on the system, but we cannot identify which file it is. We uninstalled the previous setup and removed all the registry keys as well, but we still hit the same error. We also tried running the previous version (v7.x) and get the same error (the system cannot find the path specified).

C:\Windows\Temp\splunk>Splunkinstall.bat
C:\Windows\Temp\splunk>msiexec /i "splunkforwarder-9.1.2-xxx-x64-release.msi" AGREETOLICENSE=Yes /quiet
C:\Windows\Temp\splunk>net stop SplunkForwarder
The SplunkForwarder Service service is not started.
More help is available by typing NET HELPMSG 3521.
C:\Windows\Temp\splunk>copy deploymentclient.conf "c:\Program Files\splunkuniversalforwarder\etc\system\default\"
0 file(s) copied.
C:\Windows\Temp\splunk>net start SplunkForwarder
System error 3 has occurred.
The system cannot find the path specified.
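Two things stand out in the transcript: the copy reports "0 file(s) copied", which suggests the msiexec /quiet install had not finished (or went elsewhere) by the time the script continued, and the config is being copied into etc\system\default, which Splunk reserves for shipped defaults; local config belongs in etc\system\local. A hedged sketch of a more robust install batch (the INSTALLDIR value and log path are assumptions):

REM wait for the MSI to finish and keep a verbose log for troubleshooting
start /wait msiexec /i "splunkforwarder-9.1.2-xxx-x64-release.msi" AGREETOLICENSE=Yes INSTALLDIR="C:\Program Files\SplunkUniversalForwarder" /quiet /L*v uf_install.log

REM site config goes in local, never default
copy deploymentclient.conf "C:\Program Files\SplunkUniversalForwarder\etc\system\local\"

net start SplunkForwarder

System error 3 ("path not found") on net start usually means the service's registered executable path does not exist, so if the error persists after a clean reinstall, check the SplunkForwarder service's "Path to executable" in services.msc (or the uf_install.log) to see where the MSI actually put the files.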
Hi all, I have a very specific migration: I am moving from a 5-indexer single-site cluster to a 4-indexer multisite cluster with 2 indexers per site. I have a couple of questions around it. First, the current indexers are all hot storage; on the new hardware I want to split this into hot and cold, and since the Splunk sizing app is no longer available I need help with the calculations. Second, how do I make sure no data from the 5 existing indexers is missed while migrating to the new indexers? Regards, Kulvinder Singh @richgalloway @PickleRick @gcusello
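On the sizing question, the usual back-of-envelope method is: compressed rawdata is roughly 15% of the raw ingest volume and the index (tsidx) files roughly 35%, rawdata is kept RF times and searchable copies SF times, and hot versus cold is just a question of how many days of that total sit on each tier. A hedged sketch with placeholder numbers (plug in your own ingest, RF/SF, and retention):

# rough multisite sizing sketch; every number below is a placeholder
daily_ingest_gb = 100        # raw GB/day actually indexed
rf, sf = 3, 2                # total replication factor, search factor
retention_days = 365
hot_warm_days = 30           # days kept on fast storage before rolling to cold
indexers = 4

per_day_cluster_gb = daily_ingest_gb * (0.15 * rf + 0.35 * sf)
hot_gb_per_indexer = per_day_cluster_gb * hot_warm_days / indexers
cold_gb_per_indexer = per_day_cluster_gb * (retention_days - hot_warm_days) / indexers
print(round(hot_gb_per_indexer), round(cold_gb_per_indexer))

With these example numbers that is roughly 860 GB of hot/warm and about 9.6 TB of cold per indexer, before any headroom; most sizing guidance adds 10-20% margin plus space for frozen/archive if used. On the second question, the safer pattern is to have old and new peers coexist in the same cluster, let the manager replicate and rebalance, and then decommission the old indexers one at a time with splunk offline --enforce-counts, rather than copying buckets by hand.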
I'm trying to make a box plot using a viz, but a "trace 0" series shows up in the box plot (I don't have any data called "trace 0").

This is my code:

<row>
  <panel>
    <viz type="splunk_plotly_collection_viz.boxplot">
      <search>
        <query> .....
| eval total_time=case(time<= 8, "8", time<= 9, "8~9", time<= 10, "9~10", time<= 11, "10~11", time<= 15, "11~15", time<= 20, "15~20")
| table total_time init_dt
        </query>
      </search>
      <option name="drilldown">all</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">0</option>
    </viz>
  </panel>
</row>

This is the current state of my graph (screenshot not included). How can I remove "trace 0" from the graph?
I have a field whose values are sometimes only numbers and sometimes a combination of numbers and special characters. I would like to filter for the values that contain both numbers and special characters. Example:

Log 1 -> field1="238_345$345"
Log 2 -> field1="+739-8883"
Log 3 -> field1="542.789#298"

I have already tried writing a regex, but I could not find an expression that matches the combination of digits and special characters (that is, one covering all special characters). How can I filter and display only the field values that contain both numbers and special characters? Could anyone help me with this?
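A hedged SPL sketch: keep events where field1 contains at least one digit and at least one character that is not a letter, digit, or whitespace. The index is a placeholder, and the character class can be adjusted if things like '_' or '.' should not count as special:

index=your_index field1=*
| regex field1="\d"
| regex field1="[^A-Za-z0-9\s]"
| table field1

Each | regex line only keeps events whose field matches, so chaining the two acts as an AND; the same test can be written as a single pattern with lookaheads, regex field1="^(?=.*\d)(?=.*[^A-Za-z0-9\s])", if a one-liner is preferred.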