I have a Spring Boot application deployed in AKS, and I am using the AppDynamics Docker image to inject the agent into the JVM. The setup works fine in lower environments, and even in production everything worked fine for 1-2 days. Then I configured the Java transaction detection rules to include a custom header in the transaction name, making them similar to what we have in non-production. It worked for a few hours, then the agent stopped capturing any business transactions and transaction snapshots.

I can still see the traffic under Service Endpoints, where the number of calls and the average response time are captured. Looking at the events, I can see entries like:

- New Business Transaction Discovered /xxx.yyy (but no data for this in the Business Transactions tab)
- Errors like: Application Server Exception, Agent Data Diagnostics - OVERFLOW_BT

Currently the production environment has less traffic than the non-production environment. Apart from this application, we only have a few Browser and Mobile apps, but no other Java/.NET applications. The application has not been restarted since I configured the business transaction name. I tried setting "find-entry-points" to true, but could not make much sense of its logs.
Hello Splunkers, Is it possible to combine multiple [<spec>] stanzas in a props.conf file? I would like to set up a specific LINE_BREAKER, but only for a source on a specific host... Is something like [host::<whatever>] + [source::<whatever>] possible? Thanks a lot, GaetanVP
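For anyone sketching this out: a props.conf stanza header matches on exactly one of host, source, or sourcetype, so the two cannot be ANDed in a single header. Often a sufficiently specific [source::...] pattern is enough on its own. A minimal sketch, where the path pattern and line-break regex are hypothetical:

```ini
# props.conf - hedged sketch; the path and regex are placeholders.
# Stanza headers match on ONE of host::, source::, or sourcetype;
# they cannot be combined like [host::x && source::y].
[source::/var/log/myapp/*.log]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
```

If the restriction genuinely must involve the host, one common workaround is to deploy this props.conf only to the forwarder on that specific host, so the [source::...] stanza is effectively host-scoped.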
Hello everyone, I have a question about Dashboard Studio, in JSON format. I made a dashboard and I want a specific color for the "diff" field:

- when diff is < 0: red
- when diff is > 0: green

My SPL query is:

`easyVista` source="incidents_jour"
| dedup "N° d'Incident"
| rename "Statut de l'incident" as statut
| eval STATUT=case(
    match(statut,"Résolu"),"fermé",
    match(statut,"Clôturé"),"fermé",
    match(statut,"Annulé"),"fermé",
    match(statut,"Archivé"),"fermé",
    match(statut,"A prendre en compte"),"ouvert",
    match(statut,"Suspendu"),"ouvert",
    match(statut,"En cours"),"ouvert",
    match(statut,"Escaladé"),"ouvert")
| timechart count by STATUT usenull=f
| eval diff=fermé-ouvert

Can you help me please? Thanks
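A sketch of one way this is usually done in Dashboard Studio JSON, using a columnFormat rule with rangeValue and a context block. The visualization id (viz_table_1) and context name are assumptions; the from/to boundaries pivot on 0 so negative values go red and positive values go green:

```json
{
  "visualizations": {
    "viz_table_1": {
      "type": "splunk.table",
      "options": {
        "columnFormat": {
          "diff": {
            "rowColors": "> table | seriesByName(\"diff\") | rangeValue(diffRowColorsEditorConfig)"
          }
        }
      },
      "context": {
        "diffRowColorsEditorConfig": [
          { "to": 0, "value": "#D41F1F" },
          { "from": 0, "value": "#118832" }
        ]
      }
    }
  }
}
```

The same rule can also be built interactively via the table's Column Formatting options in the Dashboard Studio editor, which generates JSON of this shape.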
I have some error logs like below:

TYPE=ERROR, DATE_TIME=2022-12-31 03:30:27,281, CLASS_NAME=myClass, METHOD_NAME=myMethod, message=unknown error while fetching recordsjavax.net.ssl.SSLHandshakeException: General SSLEngine problem
TYPE=ERROR, DATE_TIME=2023-01-19 00:38:09,013, CLASS_NAME=myClass, METHOD_NAME=myMethod, message=unknown error while fetching recordsjava.lang.IllegalStateException: could not create the default ssl context

I need to get the message field of these logs and sort them based on _time. I am trying the queries below.

Query 1:

index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR
| extract pairdelim=",|" kvdelim="=" | table message | sort _time

Query 2:

index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR
| rex "^(?:(?<message>[^,]*),){7}" | table message | sort _time

But for both of these queries, instead of getting the entire message field, I get just the first word:

message
--------
unknown

Please help on how to achieve this.
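A sketch of one way to capture everything after message= through to the end of the event, assuming (as in the samples above) that message is always the last key-value pair on the line:

```spl
index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR
| rex "message=(?<message>.+)$"
| sort _time
| table _time message
```

The original attempts appear to stop at the first delimiter: extract splits values on spaces and commas, and the [^,]* in the rex pattern stops at the first comma, so a message containing spaces or commas is truncated to its first token.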
Hi, I need to show error messages for one particular service. The challenge is that, for example, I need to show error messages including the "withdrawal failed" error, but if this error happened because of "insufficient balance", then that "withdrawal failed" event should not be listed. Other error messages should be listed regardless.

Example query: index=abc sourcetype=payment log_level=Error | table message

The result would be:
withdrawal failed
deposit failed
Error 404
customer not found

But I want something like:
index=abc sourcetype=payment log_level=Error | search message="*Withdrawal failed*" | join type=inner requestId [search index=abc | search message="*insufficient balance*"] --> if this part is true, then it should not list the "withdrawal failed" error.

The results should be:
deposit failed
Error 404
customer not found

Please help me with the search. Thanks in advance.
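One way to sketch this without a join: let a subsearch collect the requestIds that hit "insufficient balance", and exclude only those "withdrawal failed" events whose requestId is in that set. Field names follow the question; adjust to your data:

```spl
index=abc sourcetype=payment log_level=Error
| search NOT (message="*withdrawal failed*" AND
    [ search index=abc message="*insufficient balance*" | fields requestId ])
| table message
```

The subsearch expands into a list of requestId=... conditions, so a "withdrawal failed" event is dropped only when its requestId also produced an "insufficient balance" error; all other errors pass through untouched. The usual subsearch limits (10,000 results / 60 seconds by default) apply, so for very large result sets a stats-based correlation on requestId may be safer.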
Hi.. I have suggested that some of my new team mates open a Splunk user account (for free Splunk trainings and to log in to the Splunk community as well), but when my team mates try to use a Gmail id to register for a Splunk user account, they get the error: "Due to US export compliance requirements, Splunk has suspended your access." (My team mates have been facing this issue for more than a month, including just now.) Any ideas/suggestions please.
Hi, I am building a modular input which grabs data from an API on a regular basis. I'd like to let the app user configure the polling interval so that they can manage it against rate limits, but I am hitting some really weird behaviour.

If I specify a value for interval in inputs.conf, but do not include interval in inputs.conf.spec, the modular input runs on schedule. Of course, this means that the interval is not configurable by the user on the data input page. However, if I include interval in inputs.conf.spec so that the user can configure it, the modular input runs once and never again.

This is reflected in the splunkd logs too - when the interval is included in inputs.conf.spec, the scheduler just shows 'run once'. If the interval is not in the spec file, the scheduler schedules the input properly. I've tested this a few times and managed to repeatedly recreate the behaviour - and I also created a barebones input just to make sure that it wasn't something I had done in developing my input. The same problem occurred.

Does anyone know what is going on here? Is it possible to let app users configure the interval parameter? Cheers, Matt
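For reference, a minimal sketch of the two files being compared; the scheme name mymodinput and the values are hypothetical:

```ini
# README/inputs.conf.spec - declares interval so the UI exposes it
[mymodinput://<name>]
interval = <value>
* How often, in seconds (or as a cron expression), to poll the API.

# local/inputs.conf - an actual configured instance
[mymodinput://example]
interval = 300
```

One thing worth checking under this setup: interval is a parameter that splunkd itself schedules for modular inputs, so it is worth verifying that the input's scheme XML does not also declare interval as an <arg> - a scheme-level re-declaration is a plausible reason the scheduler falls back to 'run once', though I cannot confirm that from the question alone.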
Hello, I have a custom time range picker, which I defined in the local times.conf of my app (/opt/splunkdev2/splunk/etc/apps/FRUN/local/times.conf):

[EVENT_20220819]
header_label = UPG
label = 2208
earliest_time = 20220819
latest_time = 20230217
order = 501

It works kind of okay, but my label is not displayed when I pick the time range in the dashboard; it gets translated to the date instead. When I choose the range from the picker, the dashboard shows the resolved dates rather than my label. What am I doing wrong?

The second question would be whether there is a way to set the default of the time picker based on a dynamic fetch from times.conf. The point is that I am updating times.conf on a daily basis at OS level with crontab, then refreshing the conf-times entity so the time ranges appear in the picker. This works fine, but as new ranges come in, I would also like to change the default for the picker in my dashboard (e.g. setting it to the last time range event, being the last software upgrade for instance).

And the last question would be whether there is any way to make these time ranges visible only in my particular dashboard. I have set them at the app level, but the app has many dashboards and I do not need them there; it confuses the users. Or at least group them in a separate category like Presets2 (e.g. SoftwareUpgrades).
We are using the DB Connect add-on to get data from an Oracle DB. The data stopped appearing in search results, although the inputs still work when executed in the add-on. How can we monitor the status when data stops coming in?
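A hedged starting point for monitoring the add-on itself: DB Connect writes its own execution logs to the _internal index, so a search over those sourcetypes can show when inputs last ran and whether they errored (exact sourcetype names can vary by DB Connect version):

```spl
index=_internal sourcetype=dbx* (ERROR OR WARN)
| stats latest(_time) as last_seen count by sourcetype, log_level
```

A companion check is an alert that fires when the destination index goes quiet; my_oracle_index below is a placeholder for whatever index the DB Connect input writes to:

```spl
| tstats latest(_time) as last_event where index=my_oracle_index
| eval minutes_silent=round((now()-last_event)/60)
```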
Is it possible with ITSI to establish a uni-directional ticketing integration with ServiceNow? The plugin works with a bidirectional connection, but I need ServiceNow to pull events from Splunk uni-directionally. Maybe you have a solution?
We have an on-prem Splunk deployment with 50TB/day ingest, 22PB of storage, and long-term retention (1 year on most indexes). We use Kafka as the primary ingest method. We collect cloud logs in Azure using EventHub and Kinesis in AWS, then use Kafka MirrorMaker to bring cloud logs into on-prem Kafka.

The volume ratio is currently 95% on-prem vs 5% cloud logs by volume. Inevitably this ratio will change in favor of cloud logs, and the cost to bring cloud logs on-prem will rise. For Azure alone we estimate Microsoft charges $15k/month for 5TB/day of data egress. And of course the size of our cloud pipe is an issue.

Therefore we are considering an architecture which avoids backhauling cloud logs. Our stakeholders have made it clear they want a seamless experience - no logging into multiple platforms. Is your organization in a similar position, or have you overcome this problem?
Event 1: Product=shirt1 sku=123 sku=234
Event 2: Product=shirt2 sku=987 sku=789

index=store | rex field=_raw max_match=0 "sku\W(?P<sku>.*?)\," | stats count by _time, sku

Output:

_time     sku  count
01-04-23  123  1
01-04-23  234  1
01-04-23  987  1
01-04-23  789  1

Output I'm looking for:

_time     count
01-04-23  4
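A sketch of one way to get a single per-day total instead of a per-sku breakdown: count the extracted multivalue sku entries per event with mvcount, then sum by day. The rex pattern is taken from the question as-is:

```spl
index=store
| rex field=_raw max_match=0 "sku\W(?P<sku>.*?)\,"
| eval sku_count=mvcount(sku)
| timechart span=1d sum(sku_count) as count
```

Alternatively, | mvexpand sku followed by | timechart span=1d count yields the same daily totals with the original extraction.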
Hey people, here is what I am trying to do. I have a pie chart generated from the following query:

...| table filterExecutionTime ddbWriteExecutionTime buildAndTearDownTime | transpose 0

The pie chart looks stunning, but the only pain point is that to see the values I have to hover over the elements. Instead, what I was thinking of making is a legend panel which shows the names along with the values.

I was able to create a legend panel, but I couldn't add the values to it. Here is how I did the legend panel:

<panel id="panel_legend_2"> <table> <search base="errors2"> <query>|fields column | rename column as "Legends" </query> </search> <format type="color" field="Legends"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> </table> </panel>

Could you please help me out here?
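A sketch of one way to carry the values into the legend table: after transpose, the value column is named "row 1" by default, so the base-search panel can simply keep both columns. The base search id errors2 and the field names follow the question:

```xml
<panel id="panel_legend_2">
  <table>
    <search base="errors2">
      <query>| fields column, "row 1"
| rename column as "Legends", "row 1" as "Value"</query>
    </search>
    <format type="color" field="Legends">
      <colorPalette type="sharedList"></colorPalette>
      <scale type="sharedCategory"></scale>
    </format>
  </table>
</panel>
```

If the base search renames transpose's output columns (e.g. via its column_name or header_field options), adjust the field names here accordingly.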
Hey people, here is what I am trying to do:
- I have two dashboards, dashboardA & dashboardB
- I am sending a token value from dashboardA -> dashboardB
- Inside dashboardB, I want to create a dynamic dropdown based on the token value

This is how my dropdown should look: let's say the token value is 10, I want to display choices from 1 to 10 in the dropdown.

My thinking was to do something like this:

<input type="dropdown" token="subrack_No"> <choice value="*">All</choice> <search> <query> | makeresults | "some command which generates data from 1 to token value" </query> </search> <default>1</default>

Could you please help me here?
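A sketch of one way to populate the dropdown, assuming the incoming token from dashboardA is called max_subracks (a hypothetical name): token substitution happens before the populating search is dispatched, so the token can feed makeresults count=, and streamstats numbers the rows 1..N:

```xml
<input type="dropdown" token="subrack_No">
  <choice value="*">All</choice>
  <search>
    <query>| makeresults count=$max_subracks$
| streamstats count as subrack
| table subrack</query>
  </search>
  <fieldForLabel>subrack</fieldForLabel>
  <fieldForValue>subrack</fieldForValue>
  <default>1</default>
</input>
```

The populating search only runs once the token is set, so dashboardB should pass the token in its URL (form.max_subracks=10) or set a sensible default.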
Hi, Splunkers, I have the following table drilldown where earliest and latest time work properly, but when I copied it to a chart drilldown, it stopped working.

I noticed the table drilldown has the drilldown option value=row: <option name="drilldown">row</option> and the link for passing earliest and latest time uses row.StartDTM_epoch and row.EndDTM_epoch: form.field2.earliest=$row.StartDTM_epoch$&form.field2.latest=$row.EndDTM_epoch$

I noticed my chart drilldown has the drilldown option value=all: <option name="drilldown">all</option> so I changed it to form.field2.earliest=$all.StartDTM_epoch$&form.field2.latest=$all.EndDTM_epoch$ - not sure if all.StartDTM_epoch and all.EndDTM_epoch are causing the failure.

The following is the related working code for the table drilldown to pass earliest and latest time:

| eval StartDTM_epoch = relative_time(_time,"-20m")
| eval EndDTM_epoch = relative_time(_time,"+20m")
| eval TIME = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table _time,sid,Type,AgentName,DN,FAddress,Segment,Function,Client,Product,SubFunction,SubFDetail,MKTGCT,CCType,VQ,TLCnt,AFRoute,StateCD,TargetSelected,AFStatus,CBOffered,CBRejected,AQT,EWT,EWTmin,PIQ,WT,LSInRange,LSPriority,LSRateS,PB,PCSS,PENT,PF,RONA,LANG,StartDTM_epoch,EndDTM_epoch</query>
<earliest>$field2.earliest$</earliest>
<latest>$field2.latest$</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">20</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">row</option>
<option name="percentagesRow">false</option>
<option name="refresh.display">progressbar</option>
<option name="rowNumbers">true</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<fields>["_time","sid","Type","AgentName","DN","FAddress","Segment","Function","Client","Product","SubFunction","SubFDetail","MKTGCT","CCType","VQ","TLCnt","AFRoute","StateCD","TargetSelected","AFStatus","CBOffered","CBRejected","AQT","EWT","EWTmin","PIQ","WT","LSInRange","LSPriority","LSRateS","PB","PCSS","PENT","PF","RONA","LANG"]</fields>
<drilldown>
<condition match="$t_DrillDown$ = &quot;*&quot;">
<link target="_blank">
<![CDATA[ /app/optum_gvp/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token_with2handlers=$click.value2$&form.field2.earliest=$row.StartDTM_epoch$&form.field2.latest=$row.EndDTM_epoch$ ]]>
</link>

Thanks in advance, Kevin
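A hedged observation on why the copy breaks: $row.*$ tokens are only populated for table drilldowns; chart drilldowns expose a different predefined set ($click.value$, $click.name2$, $click.value2$, ...), and there is no $all.*$ token prefix. One sketch of a workaround is to derive the time window from the clicked x-axis value inside the drilldown with <eval> tokens. The strptime format below is an assumption - check what $click.value$ actually contains for your chart (if it is already epoch seconds, skip the strptime):

```xml
<option name="drilldown">all</option>
<drilldown>
  <eval token="start_epoch">relative_time(strptime($click.value|s$, "%Y-%m-%dT%H:%M:%S.%3N%z"), "-20m")</eval>
  <eval token="end_epoch">relative_time(strptime($click.value|s$, "%Y-%m-%dT%H:%M:%S.%3N%z"), "+20m")</eval>
  <link target="_blank">/app/optum_gvp/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.field2.earliest=$start_epoch$&amp;form.field2.latest=$end_epoch$</link>
</drilldown>
```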
I have a dashboard with a table with 6 headers. I would like to bold the text of the second, fourth, and fifth column headers. I searched around but could only find solutions for the first or last column headers (first-child, last-child), but nothing for anything in between.

If bolding isn't an option, I would also be open to simply coloring the background of those columns from the default gray to, say, blue, and making the column header text white, or something like that. The idea is to make the 2nd, 4th, and 5th column headers visually stand out from the rest of the table.

Example table: index second source fourth fifth count
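A sketch of one CSS-based approach using :nth-child(), which can target any column position, not just the first or last. The panel id my_table and the hidden-panel token are assumptions - add id="my_table" to the target table element in your dashboard:

```xml
<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      #my_table table thead th:nth-child(2),
      #my_table table thead th:nth-child(4),
      #my_table table thead th:nth-child(5) {
        font-weight: bold !important;
        background-color: #2B5797 !important;
        color: white !important;
      }
    </style>
  </html>
</panel>
```

The depends="$alwaysHideCSS$" trick keeps the helper panel invisible (the token is never set) while its CSS still applies. Note that if row numbers are enabled on the table, they add an extra first column and each nth-child index shifts by one.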
For a table panel, I would like to create a format rule (<format>) of type="color" that sets a threshold based on another field's data. Here is a sample search (note: streamstats and count_stream are used just to generate sample data):   | makeresults count=3 | streamstats count AS count_stream | eval count_total = count_stream * 10 ,count_metric = count_stream * count_stream * 2 | eval perc_metric = count_metric / count_total * 100 | table perc_metric count_metric, count_total   I would like to create color rules for the count_metric field/column, but base them on the perc_metric value. My goal is to leverage the percentage values in perc_metric and apply color to the cells in count_metric, hiding the perc_metric field/column in the final output. Here is my Simple XML: <dashboard> <label>test</label> <row> <panel> <table> <search> <query>| makeresults count=3 | streamstats count AS count_stream | eval count_total = count_stream * 10 ,count_metric = count_stream * count_stream * 2 | eval perc_metric = count_metric / count_total * 100 | table perc_metric count_metric, count_total</query> <earliest>@d</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">50</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> <format type="color" field="perc_metric"> <colorPalette type="list">[#DC4E41,#F1813F,#53A051]</colorPalette> <scale type="threshold">21,41</scale> </format> </table> </panel> </row> </dashboard>
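Simple XML <format> rules can only key off a cell's own value, so one sketch of a workaround (the pattern used by the Splunk Dashboard Examples app) is to pack both numbers into one field in SPL, e.g. | eval count_metric = count_metric."|".round(perc_metric,1) | fields - perc_metric, and then split them again in a custom table cell renderer. The file name, table id (metric_table), and thresholds below are assumptions; the dashboard would need script="color_by_perc.js" and <table id="metric_table">:

```javascript
// color_by_perc.js - hedged sketch of a custom cell renderer that colors
// the count_metric cell by the perc_metric value packed into it ("14|70").
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc, TableView) {
    var PercRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'count_metric';
        },
        render: function($td, cell) {
            var parts = String(cell.value).split('|');
            var value = parts[0];
            var perc = parseFloat(parts[1]);
            // Thresholds mirror the <scale type="threshold">21,41</scale> rule.
            var color = perc < 21 ? '#DC4E41' : (perc < 41 ? '#F1813F' : '#53A051');
            $td.css('background-color', color).text(value);
        }
    });
    mvc.Components.get('metric_table').getVisualization(function(tableView) {
        tableView.addCellRenderer(new PercRenderer());
        tableView.render();
    });
});
```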
Hi, I have a trial account with Splunk Cloud, where I am doing a POC on sending API logs to the Splunk dashboard. For that, I have created an HEC token, which is working fine with Postman and the curl command.

But when I actually try to use it within my React app, sending some data by calling the API with a POST request, I get the error below, which I can understand, because the browser is trying to access a different domain and the Splunk instance is not accepting requests from other domains. I am not sure how to fix this - maybe there is some setting we can change at the instance level to allow traffic from my specific domain?

Can someone please help?
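For reference, a hedged sketch of the relevant setting on Splunk Enterprise, where the HEC [http] stanza in inputs.conf supports a CORS policy. On Splunk Cloud this file is not directly editable (it would go through support, and a trial stack may not allow it at all); the origin below is a placeholder:

```ini
# inputs.conf on the instance hosting the HEC endpoint
[http]
crossOriginSharingPolicy = https://myapp.example.com
```

A common alternative for browser apps is to sidestep CORS entirely: the React app POSTs to its own backend, and the backend forwards the event to HEC server-to-server. This also keeps the HEC token out of client-side code, which is worth doing regardless.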
Hello, when I try to create a new connection for my DB Connect, using Connection Type DB2, I get the error message "No Suitable Driver is Available", saying no compatible driver was found in the "drivers" directory. How would I get the driver installed to resolve this issue? Thank you so much in advance for your support.
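A hedged sketch of the usual fix: DB Connect does not ship JDBC drivers, so the IBM DB2 JDBC driver jar (commonly named db2jcc4.jar, downloadable from IBM) has to be placed in the app's drivers directory and picked up on restart. Paths assume a default Linux install; newer DB Connect versions may instead expect a packaged driver add-on, so check your version's docs:

```shell
# copy the IBM JDBC driver into DB Connect's drivers directory
cp db2jcc4.jar $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/

# restart Splunk so the driver is picked up
$SPLUNK_HOME/bin/splunk restart
```

After the restart, the DB2 driver should appear as installed on DB Connect's Drivers settings page, and the connection wizard should accept the DB2 connection type.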