All Topics


Hi Splunk Experts, I have some data coming into Splunk in the following format:

[{"columns":[{"text":"id","type":"string"},{"text":"event","type":"number"},{"text":"delays","type":"number"},{"text":"drops","type":"number"}],"rows":[["BM0077",35602782,3043.01,0],["BM1604",2920978,4959.1,2],["BM1612",2141607,5623.3,6],["BM2870",41825122,2545.34,7],["BM1834",74963092,2409.0,8],["BM0267",86497692,1804.55,44],["BM059",1630092,5684.5,0]],"type":"table"}]

I tried to extract each field so that each value corresponds to id, event, delays, and drops in a table, using the command below:

index=result | rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)" | table id event delays drops

I do get a result in table format, but it comes back as one whole table rather than individual entries, and I cannot manipulate the result. I have also tried mvexpand, but it only works on one field at a time, so that has not helped either. Does anyone know how to properly build this table in Splunk?
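A possible approach, sketched here rather than tested against your data: since the rex with max_match=0 already produces multivalue fields, you can zip them together and expand once. The index name result and the field names are taken from the post above.

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)"
| eval row=mvzip(mvzip(mvzip(id, event, "|"), delays, "|"), drops, "|")
| mvexpand row
| eval parts=split(row, "|")
| eval id=mvindex(parts, 0), event=mvindex(parts, 1), delays=mvindex(parts, 2), drops=mvindex(parts, 3)
| table id event delays drops

mvexpand then yields one result row per id/event/delays/drops tuple, which can be filtered and manipulated like any other result set.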
For non-persistent instant clones using VMware's ClonePrep, I have installed the UF with launchsplunk=0 on the master/gold image. Then I run "splunk clone-prep-clear-config", set the service to Manual so it doesn't start automatically on the master/gold image, and publish the desktops. I then have a scheduled task that runs a few minutes after the user logs on and calls an elevated "splunk.exe restart" to erase the GUID and generate a new one before the splunkd service starts. Is there a way for the process this invokes to run silently, i.e. with no pop-up window?
To obtain the results in a dashboard I am doing the following:
1. First I created a data model.
2. I use the data model in macros that run on a 1h and a 1d basis.
3. I pass those macros into saved searches and collect the results on an hourly and daily basis.
4. The span_token is passed to the macros from the dashboard code below.
5. The macros and saved searches are attached at the end of the dashboard code.

Issue: I am not getting proper results with this approach and the dashboard is not populating correctly. I need guidance to fix the issue.

Dashboard code:

<form version="1.1" theme="light">
  <label>Throughput : Highbay</label>
  <init>
    <set token="span_token">$form.span_token$</set>
  </init>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="time" token="time" id="my_date_range" searchWhenChanged="true">
        <label>Select the Time Range</label>
        <default>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </default>
        <change>
          <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum(strptime('earliest', "%s")),'earliest',relative_time(now(),'earliest')))</eval>
          <eval token="time.latest_epoch">if(isnum(strptime('latest', "%s")),'latest',relative_time(now(),'latest'))</eval>
          <eval token="macro_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "throughput_macro_summary_1d",if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "throughput_macro_summary_1h","throughput_macro_raw"))</eval>
          <eval token="form.span_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "h", $form.span_token$))</eval>
        </change>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Total Pallet</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*" | strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Pallet IN</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Entry*" | strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Pallet OUT</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` | search LocationQualifiedName="*/Aisle*Exit*" | strcat "raw" "," location group_name | timechart sum(count) as cnt by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>

Macros:

throughput_macro_raw(1)
datamodel Walmart_throughput Highbay_throughput flat
| bin _time span="$span_token$"
| rename AsrTsuEventTrackingUpdate.LocationQualifiedName as LocationQualifiedName
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year

throughput_macro_summary_1d(1)
search index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="$span_token$"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count

throughput_macro_summary_1h(1)
search index="tput_summary" sourcetype="tput_summary_1h"
| bin _time span=$span_token$
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count

Saved searches:

throughput_summary_index_1d
| `throughput_macro_raw(span_token="1d")`
| strcat "raw" "," location group_name
| stats count by location _time LocationQualifiedName
| collect index="tput_summary" sourcetype="tput_summary_1d"

throughput_summary_index_1h
| `throughput_macro_raw(span_token="1h")`
| strcat "raw" "," location group_name
| stats count by location _time LocationQualifiedName
| collect index="tput_summary" sourcetype="tput_summary_1h"
Hi, I have 4 fields in my index: ID, Method, URL, HTTP_responsecode. ID is in the form XXXX-YYYY-ZZZZ-AAAA. Now I want to delimit the ID field, extract the YYYY value, and then run a stats command on the delimited value by HTTP_responsecode - something like this:

Delimited_ID    HTTP_responsecode    Count
YYYY            200                  10

Could you please help with how to delimit the value in the format mentioned above and how to use the new delimited value in a stats command?
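One possible way to do this, assuming the ID values always use "-" as the separator and YYYY is the second segment (the index name below is a placeholder):

index=your_index
| eval Delimited_ID=mvindex(split(ID, "-"), 1)
| stats count as Count by Delimited_ID HTTP_responsecode

split() turns the ID into a multivalue field of segments, and mvindex(..., 1) picks the second one, which then works like any other field in stats.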
Hi All, what are some of the best ways to monitor the health status of the KV store on the heavy forwarders in my environment? I am looking for a way to monitor it with a search from my search head. Thanks in advance!
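One hedged starting point, assuming the heavy forwarders forward their own _internal logs to your indexers (the host names below are placeholders): the KV store process (mongod) logs into _internal, so a search head can at least confirm it is alive and logging per forwarder.

index=_internal sourcetype=mongod host IN ("hf01", "hf02")
| stats latest(_time) as last_mongod_event count as mongod_events by host
| eval last_mongod_event=strftime(last_mongod_event, "%F %T")

If the forwarders also happen to be configured as search peers of the search head, querying the kvstore status REST endpoint with the rest command is another option worth checking, but that is not the typical setup for heavy forwarders.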
We have an alert whose cron schedule runs every 6 hours (0 */6 * * *), but I don't want to receive the alert at 6pm only. How can I write a cron expression for that?
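If the intent is to keep the every-6-hours cadence but skip just the 18:00 run (an assumption on my part), one option is to list the hours explicitly instead of using a step value, e.g. 0 0,6,12 * * *, which fires at 00:00, 06:00, and 12:00 but not at 18:00.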
Hi AppDynamics Community,

I'm integrating the AppDynamics Python Agent into a FastAPI project for monitoring purposes and have encountered a bit of a snag regarding log verbosity in my stdout. I'm launching my FastAPI app with the following command to include the AppDynamics agent:

pyagent run -c appdynamics.cfg uvicorn my_app:app --reload

My goal is to reduce the verbosity of the logs from both the AppDynamics agent and the proxy that are output to stdout, aiming to keep my console output clean and focused on more critical issues.

My module versions:

$ pip freeze | grep appdy
appdynamics==23.10.0.6327
appdynamics-bindeps-linux-x64==23.10.0
appdynamics-proxysupport-linux-x64==11.64.3

Here's the content of my `appdynamics.cfg` configuration file:

[agent]
app = my-app
tier = my-tier
node = teste-local-01

[controller]
host = my-controller.saas.appdynamics.com
port = 443
ssl = true
account = my-account
accesskey = my-key

[log]
level = warning
debugging = off

I attempted to decrease the log verbosity further by modifying the `log4j.xml` file for the proxy to set the logging level to WARNING. However, this change didn't have the effect I was hoping for. The `log4j.xml` file I adjusted is located at:

/tmp/appd/lib/cp311-cp311-63ff661bc175896c1717899ca23edc8f5fa87629d9e3bcd02cf4303ea4836f9f/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j.xml

Here are the adjustments I made to the `log4j.xml`:

<appender class="com.singularity.util.org.apache.log4j.ConsoleAppender" name="ConsoleAppender">
  <layout class="com.singularity.util.org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ABSOLUTE} %5p [%t] %c{1} - %m%n" />
  </layout>
  <filter class="com.singularity.util.org.apache.log4j.varia.LevelRangeFilter">
    <param name="LevelMax" value="FATAL" />
    <param name="LevelMin" value="WARNING" />
  </filter>

Despite these efforts, I'm still seeing a high volume of logs from both the agent and proxy. Could anyone provide guidance or suggestions on how to effectively lower the log output to stdout for both the AppDynamics Python Agent and its proxy? Any tips on ensuring my changes to `log4j.xml` are correctly applied would also be greatly appreciated. Thank you in advance for your help!

Example of logging messages I would like to remove from my stdout:

2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
...
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:51 BRT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf]
...
11:15:52,167 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Starting BT Logs at Sat Mar 23 11:15:52 BRT 2024
11:15:52,168 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - ###########################################################
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Using Proxy Version [Python Agent v23.10.0.6327 (proxy v23.10.0.35234) compatible with 4.5.0.21130 Python Version 3.11.6]
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] JavaAgent - Logging set up for log4j2
...
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] JDBCConfiguration - Setting normalizePreparedStatements to true
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] CallGraphConfigHandler - Call Graph Config Changed callgraph-granularity-in-ms Value -null
Hi, I have a single query that returns all types of data in a table. For one particular type I have an issue with null values: I need to remove the null-value results for that particular type only, without affecting the other types of data. I need to remove the null values in the "error message" field for Type 1; for Type 2 it should stay as it is. Thanks in advance.
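A sketch of one way to do this, assuming the type field is literally named type with values like "Type 1"/"Type 2" and the problem field is named "error message" - adjust the names to whatever your data actually uses:

... your existing query ...
| where NOT (type="Type 1" AND (isnull('error message') OR 'error message'=""))

Only rows where both conditions hold are dropped, so Type 2 rows, and Type 1 rows that do carry an error message, pass through unchanged.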
Here is my search in question; the common field is the SessionID:

index=eis_lb apm_eis_rdp
| fillnull value="-"
| search UserID!="-"
| rex field=_raw "\/Common\/apm_eis_rdp:ent-eis[:a-zA-Z0-9_.-](?'SessionID'........)"
| search company_info="*"
| rename company_info as "Agency"
| table _time, SessionID, UserID, Full_Name, Agency, HostName, client_ip
| sort - _time

_time                SessionID  UserID    Full_Name  Agency    HostName  client_ip
2024-03-22 08:25:29  4f89ae57   Redacted  Redacted   Redacted  Redacted  -

If I remove the search on UserID, I can see the matching SessionID and the client_ip is present:

_time                SessionID  UserID    Full_Name  Agency    HostName  client_ip
2024-03-22 14:26:48  4f89ae57   Redacted  Redacted   Redacted  Redacted  -
2024-03-22 14:25:52  4f89ae57   -         -          -         -         Redacted

How can I create a search like the one above that shows the client_ip matching the SessionID?
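One way to stitch the two event types together, sketched from the fields in the search above: aggregate by SessionID instead of filtering out the rows that only carry the client_ip, and let values() merge the user details and the IP onto one row.

index=eis_lb apm_eis_rdp
| rex field=_raw "\/Common\/apm_eis_rdp:ent-eis[:a-zA-Z0-9_.-](?'SessionID'........)"
| rename company_info as Agency
| stats latest(_time) as _time values(UserID) as UserID values(Full_Name) as Full_Name values(Agency) as Agency values(HostName) as HostName values(client_ip) as client_ip by SessionID
| sort - _time
| table _time SessionID UserID Full_Name Agency HostName client_ip

Dropping the fillnull and the UserID filter before the stats is deliberate: values() simply skips events where a field is missing, so the "-" placeholders are no longer needed.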
Hi! I have a dashboard with two parts - a table based on an existing dataset, and a column chart based on this query:

| bucket _time span=day | stats count by _time

The chart's source code looks like this:

{
  "type": "splunk.column",
  "dataSources": { "primary": "..." },
  "title": "...",
  "options": {
    "x": "> primary | seriesByName('_time')",
    "y": "> primary | frameBySeriesNames('count')",
    "legendDisplay": "off",
    "xAxisTitleVisibility": "hide",
    "yAxisTitleText": "...",
    "showYAxisWithZero": true
  },
  "eventHandlers": [],
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I want a click on any column to filter the table based on global_time - if I click on March 22, the table should only show records where _time is between Mar 22 00:00:00 and Mar 22 23:59:59. How do I do that?
I am receiving the following error when trying to run "./splunk show cluster-bundle-status":

Failed to contact the cluster manager. ERROR: Cluster manager is not enabled on this node.

A duplicate error is displayed for the peers. But when I sign into the cluster manager and go to Indexer Clustering, all 4 of my indexers are there on the dashboard and the manager node is properly set in the configuration. I've even double-checked the .conf files. Any suggestions?
Is there an existing Splunk log that would identify the time an entity is "retired" in Splunk ITSI? I recently had a significant amount of my entities retire for some reason despite the entities still sending metrics data to the metrics indexes. I do have an auto-retire policy in place, but I do not believe that any of the entities in question would not have sent data in the amount of time needed for the auto-retire policy to trigger on them. I am hoping to find a log that would help me identify when entities were retired and how they were retired, be it by the auto-retire policy or an admin making a mistake somehow.
I am trying to compare an IP address field called ex_ip that's stored in a lookup file with an index called activity, which contains dest, src, and a few other fields. I am trying to match the ex_ip from the lookup file with the dest IP from the activity index. My query below is not returning any matches. Any help would be appreciated.

index="activity"
| lookup activity2 ex_ip as lb OUTPUT ex_ip as match
| eval match=if(LIKE('dest', 'ex_ip'), 1, 0)
| search match=1
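A sketch of one way to get the match, assuming the lookup is named activity2, its IP column is ex_ip, and you want it compared against the event's dest field: let the lookup do the matching directly and keep only events where it returned a value.

index="activity"
| lookup activity2 ex_ip AS dest OUTPUT ex_ip AS matched_ip
| where isnotnull(matched_ip)

Mapping dest in the lookup call (ex_ip AS dest) is what makes Splunk compare the lookup's IPs against each event's dest value, so no separate eval comparison is needed.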
I am having trouble with my search. I am searching for groups, and my groups are broken down into organization, unit, and subunit. Tokens are passed in for each respective part of the group. Example:

Group1: apple.banana.orange
Group2: apple.banana.grape
Group3: melon.berry

index | search organization = $org$ | search unit = $unit$ | search subunit = $subunit$ | eval group = organization."."unit."."subunit

This outputs apple.banana.orange and apple.banana.grape, but shows nothing for melon.berry. Sometimes I have groups that do not have subunits, so I tried adding fillnull:

index | search organization = $org$ | search unit = $unit$ | fillnull value="" $subunit$ | eval group = if(isnotnull($subunit$), organization."."unit."."subunit, "organization.".".unit)

That worked for groups with no subunit, but then it stopped working for groups that do have subunits: it output melon.berry, but it also returned all the events for apple.banana instead of searching specifically for orange or grape. I am trying to make my search handle the case where the subunit token passed in is blank and still output the correct values.
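A minimal sketch of one way to handle a blank subunit token; the index name is a placeholder, and the token handling assumes $subunit$ arrives as an empty string when there is no subunit:

index=your_index organization="$org$" unit="$unit$"
| eval subunit=coalesce(subunit, "")
| where subunit="$subunit$"
| eval group=organization.".".unit.if(subunit="", "", ".".subunit)

With a populated token this filters to the exact subunit (orange, grape); with an empty token it matches only events that have no subunit, and the trailing segment is simply left off the group value.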
I was looking into the splunk integration with hadoop and saw that it's on schedule for EOL (Jan 2025 per https://docs.splunk.com/Documentation/Splunk/9.2.0/HadoopAnalytics/MeetSplunkAnalyticsforHadoop).  I know it's changed around a few times, like there used to be a "hadoop connect" app, before "Splunk Analytics for Hadoop". Is this happening again where it's just moving somewhere else, or is it totally gone now? Nothing else to substitute it?
Hi guys, can you please help me? I'm trying to use a space as the thousands separator and I can't; the closest I could get is a comma, with this:

eval value = if(value!="N/A", printf("%'d", value), value)

Result = 123,456

So I guess I could change it with a replace, maybe. But then there is problem number 2: when I try to sort by the value using the arrow on the column, the sort isn't correct, and the bigger numbers are treated as strings. Can you help me solve this, please? Tell me if you need more details.
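A sketch of one workaround, assuming this feeds a table you control: build a separate display string with spaces and keep the real number for sorting, since a value containing spaces can only ever sort as a string.

| eval value_sort=if(value="N/A", null(), tonumber(value))
| eval value_display=if(isnull(value_sort), value, replace(printf("%'d", value_sort), ",", " "))
| sort 0 - value_sort

Show value_display in the table and sort on (or hide) value_sort; sorting on the formatted column itself will always fall back to string comparison.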
Hello, I have some Linux systems that run this line from cron every day:

/usr/bin/nmon -f -t -s 300 -c 288 -m /var/log/nmon/

As a result, I have one file per day with nmon metrics. These servers don't have connectivity to Splunk. My question is: is it possible to ingest these files into the NMON Splunk App to analyze them? I suppose I would manually load the files into the nmon index, but I'm not sure whether I have to do something else first. Thank you in advance.
I have the query below to calculate average response times. For some reason the value sometimes comes back as '0', and I want to remove those values from my calculation.

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) As "Count", avg(calc:service.thaa_stress_requests_lr_tags) As "Response" where index=itsi_im_metrics by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
| search Dimension.id IN ("*Process.aspx")

Sample values:

metric_name:calc:service.thaa_stress_requests_lr_tags: 4115725
metric_name:calc:service.thaa_stress_requests_lr_tags: 0
metric_name:calc:service.thaa_stress_requests_lr_tags: 3692799
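mstats can't filter on the metric value in its where clause, so one possible workaround (a sketch, span and names taken from the query above) is to pull the metric at a finer span first, drop the zero samples, and then re-aggregate. Note the final Response is then an average of per-span averages, which is only an approximation of the true average.

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) as Count avg(calc:service.thaa_stress_requests_lr_tags) as Response where index=itsi_im_metrics by Dimension.id span=1m
| where Response > 0
| stats sum(Count) as Count avg(Response) as Response by Dimension.id
| eval Response=round(Response/1000000, 2), Count=round(Count, 0)
| search Dimension.id IN ("*Process.aspx")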
Hi, my event contains unstructured data: a few strings, then an XML part, then a few more strings, and then another XML block followed by a few more strings. How do I extract only the XML parts from the event when there is no pattern to the strings, i.e. neither the number of lines before and after the XML nor the string content follows a pattern?
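If each XML fragment starts with a known root element, a multivalue rex can pull all of the fragments out regardless of what surrounds them. A sketch, where <Message> is a purely hypothetical root tag and (?s) lets the match span newlines:

| rex field=_raw max_match=0 "(?s)(?<xml_part><Message\b.*?</Message>)"
| table xml_part

If the root tag varies, the pattern has to be adapted (for example by alternating the known tag names); without any fixed tag or delimiter there is nothing for the regex to anchor on.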
Trying to figure out how to extract a field using regex to capture the entire string. The only problem is there are a bunch of slashes throughout - sometimes one, sometimes three, etc. I've tried variations of commands I found in the documentation, but no luck. Is this possible?

Example string of the field I want to extract, with all the context appended together, minus the slashes:

\"\\\"Field1\\\":\"context"\\\",\\\:"\"context"\\\",\\\:"\"context"\\\",\\\:"\"context"\\\",\\\:"\"context"\\\"context\\\\\\\context\\\\\\\\Field2

I want it to be extracted like this: Field1="context","context", etc., so the slashes are eliminated. Appreciate any help.
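A heavily hedged sketch, since the exact raw text is hard to read as posted: strip the escaping backslashes first, then extract whatever sits between Field1 and Field2 from the cleaned copy. Field1/Field2 are the markers from the example above; the captured field name and the exact patterns are assumptions you will likely need to adjust, and backslash escaping in eval/rex is notoriously fiddly.

| eval cleaned=replace(_raw, "\\\\", "")
| rex field=cleaned "Field1\":(?<Field1>.+?)Field2"
| table Field1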