All Topics


Hi all, Is there a way of extracting a list of EUM appkeys via the API from a controller, in the same way it is possible to extract the APM application list? e.g. "https://<controller_name>.saas.appdynamics.com/controller/rest/applications" I have trawled the docs but found no mention of this... Many thanks in advance! Philippe
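For reference, the APM endpoint mentioned above can be called like this (a sketch only: the username@account convention and the output parameter should be verified against the AppDynamics REST docs, and an equivalent endpoint for EUM appkeys is exactly what is being asked for, so it is not shown here):

curl -s -u "apiuser@customer1:password" \
    "https://<controller_name>.saas.appdynamics.com/controller/rest/applications?output=JSON"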
I was wondering if anyone has successfully onboarded KnowBe4 data? I don't see a TA or app on Splunkbase.
Hi Splunk Team, I'm trying to create a query that uses the payment IDs from one table and only keeps the payment IDs that have a completed status in another table. The completed status can happen at a later date, so I would like the subsearch to search within 10 days after the original search. My query seems to work when I search for a specific ID in the subsearch, but when I remove it, it returns no results. I'm also open to not using a join / making this more efficient, but I wasn't sure how else to do it!

auditSource="open-banking" auditType="PaymentResponse"
| fields detail.ecospendPaymentId, detail.amount
| convert num(detail.amount) as amount
| table detail.ecospendPaymentId, amount
| join type=inner detail.ecospendPaymentId
    [ search auditSource="open-banking-external-api" auditType="PaymentStatusUpdate" detail.status="Completed" latest=+10d
      | fields detail.paymentId
      | rename detail.paymentId as "detail.ecospendPaymentId" ]
| dedup "detail.ecospendPaymentId"
| table "detail.ecospendPaymentId", amount

Thank you!
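A join-free pattern that often fits this shape (a sketch, not a drop-in: the completed-flag logic and field handling are assumptions, and the outer time range has to be widened so it also covers completion events up to 10 days after the payments, since latest=+10d inside a join subsearch is evaluated relative to now, not per payment):

(auditSource="open-banking" auditType="PaymentResponse")
    OR (auditSource="open-banking-external-api" auditType="PaymentStatusUpdate" detail.status="Completed")
| eval pid = coalesce('detail.ecospendPaymentId', 'detail.paymentId')
| stats values(eval(tonumber('detail.amount'))) as amount,
        sum(eval(if(auditType=="PaymentStatusUpdate", 1, 0))) as completed by pid
| where completed > 0
| table pid, amount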
Hi everyone, I'm having a little trouble with a Treemap visualization. I'm using Splunk Enterprise v8.2.5, and Treemap is a custom Splunk visualization (I downloaded it from Splunkbase at this page). I wanted to create a treemap with a dataset that, after aggregation, looks like the following table:

category  subcategory  size  status
A         a            1     low
A         b            2     low
A         c            10    high
B         a            5     low
B         b            3     medium
B         c            4     high
C         a            1     medium
C         b            2     high
D         b            7     low
D         c            5     high

In this example, the first level of the treemap hierarchy (the parent category field) is represented by the field called "category"; the field "subcategory" represents the second level (the child category field); the "size" field is the numerical value by which each rectangle should be sized; and the "status" field should set the color of each rectangle. Here is a sample XML for a dashboard with a treemap visualization based on some dummy data that looks like the above example:

<dashboard>
  <label>treemap example</label>
  <row>
    <panel>
      <viz type="treemap_app.treemap">
        <search>
          <query>| makeresults | eval size=1, status="low", category="A", subcategory="a"
| append [| makeresults | eval size=2, status="low", category="A", subcategory="b" ]
| append [| makeresults | eval size=10, status="high", category="A", subcategory="c" ]
| append [| makeresults | eval size=5, status="low", category="B", subcategory="a" ]
| append [| makeresults | eval size=3, status="medium", category="B", subcategory="b" ]
| append [| makeresults | eval size=4, status="high", category="B", subcategory="c" ]
| append [| makeresults | eval size=1, status="medium", category="C", subcategory="a" ]
| append [| makeresults | eval size=2, status="high", category="C", subcategory="b" ]
| append [| makeresults | eval size=7, status="low", category="D", subcategory="b" ]
| append [| makeresults | eval size=5, status="high", category="D", subcategory="c" ]
| table category, subcategory, size, status
| stats first(size) as size by category, subcategory, status</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="treemap_app.treemap.colorMode">categorical</option>
        <option name="treemap_app.treemap.maxCategories">10</option>
        <option name="treemap_app.treemap.maxColor">#dc4e41</option>
        <option name="treemap_app.treemap.minColor">#53a051</option>
        <option name="treemap_app.treemap.numOfBins">9</option>
        <option name="treemap_app.treemap.showLabels">true</option>
        <option name="treemap_app.treemap.showLegend">true</option>
        <option name="treemap_app.treemap.showTooltip">true</option>
        <option name="treemap_app.treemap.useColors">true</option>
        <option name="treemap_app.treemap.useZoom">true</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>

[screenshot of the resulting treemap]

The Treemap documentation says to use the following query to set a custom color based on a field different from the parent category field:

... | stats <stats_function>(<metric_field>) <stats_function>(<color_field>) by <parent_category_field> <child_category_field>

so first I tried the following query:

... | stats first(size) as size, first(status) as status by category, subcategory

but Splunk was returning this error:

Error rendering Treemap visualization: Check the Statistics tab.
To build a treemap with colors determined by a color field, the results table must include columns representing these four fields: <category>, <name>, <metric>, and <color>. The <color> and <metric> field values must be numeric.

So apparently both the metric and the color aggregations must be numeric (side note: this is not explained in the documentation). Then I tried this query:

... | stats first(size) as size by category, subcategory, status

i.e. I put the "status" field as the third-level grouping. This time the visualization seems to work as I intended, i.e. the color of each rectangle is decided by the value of the status field (as seen in the screenshot above). However, it is not possible to change the default color palette. For my application I would like to set the colors using this mapping:

green when status="low"
yellow when status="medium"
red when status="high"

So far I have not been able to find a way to modify the visualization (through the XML definition) in order to set a custom color mapping. Does anyone know a way to do this?
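Since the error text above says the <color> field must be numeric, one workaround sketch is to encode status as a number and let the min/max colours span it. Whether "medium" then lands on yellow depends on how the viz interpolates, and the "sequential" option value is a guess to verify against the viz documentation:

... | eval status_num = case(status=="low", 1, status=="medium", 2, status=="high", 3)
    | stats first(size) as size, first(status_num) as color by category, subcategory

with, in the XML, treemap_app.treemap.colorMode set to sequential and the existing minColor/maxColor options (#53a051 green, #dc4e41 red) defining the ends of the scale.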
Again, I wanted to list the difference in dates between two periods, and I have this code:

| eval LPD = strptime(LastPickupDate, "%m-%d-%Y %H:%M:%S")
| eval IInT = strptime(IIT, "%m-%d-%Y %H:%M:%S")
| eval diff = (IInT - LPD) / 86400
| stats list(diff) by FacilityName

I am still getting blanks.
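Blanks from strptime almost always mean the format string does not match the raw field value (or the field is absent). A debug sketch reusing the field from the post:

| eval LPD = strptime(LastPickupDate, "%m-%d-%Y %H:%M:%S")
| eval parse_check = if(isnull(LPD), "failed: " . coalesce(LastPickupDate, "field missing"), "ok")
| stats count by parse_check

If rows come back as failed, compare the printed raw value against the format string, e.g. "%m/%d/%Y" versus "%m-%d-%Y".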
Hi everybody, my data is: A = 10, B = 20, C = 30. The formula that I use is: result = A/(B+C), but I have to verify it: the result should only display when all 3 values exist; if not (one of them or all 3 of them are null), it should display as "--". Here is my command:

| eval Result = case(isnotnull(A) AND isnotnull(B) AND isnotnull(C), round(A/(B+C)), 1=1, "--")

For now, if one of them is null, it displays "--", but when all 3 of them are null, it shows the text "No result". How can I make it show "--" in both cases (1 of them null or all 3 of them null)? THANKS
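"No results" usually comes from the search returning no rows at all, in which case the eval never runs. A sketch that appends a placeholder row when the result set is empty (assuming a table or single-value panel downstream):

| eval Result = case(isnotnull(A) AND isnotnull(B) AND isnotnull(C), round(A/(B+C)), 1=1, "--")
| appendpipe [ stats count | where count=0 | eval Result="--" | table Result ]

The appendpipe subpipeline only contributes a row when count=0, i.e. when the main pipeline produced nothing.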
How do I write a search query for disk partition I/O (as a pie chart) from the Unix TA, which is onboarding Linux data? Any help much appreciated. Thank you
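A sketch against the Splunk Add-on for Unix and Linux iostat data; the index name and the field names (Device, rReq_PS, wReq_PS) are assumptions that vary by TA version, so check them against your events first:

index=os sourcetype=iostat
| eval total_io = rReq_PS + wReq_PS
| stats avg(total_io) as avg_io by Device

Rendered as a pie chart, each slice is then one device/partition's share of average I/O.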
Currently we are looking at ingesting events that have multiple event IDs logged on new lines. We want those to appear as one event in Splunk, since trying to run "| transaction event_id" slows our searches down significantly. It looks like we should be able to use transactiontypes.conf, but I am confused about how to get this to work. We are extracting the event_id in props.conf as event_id_test, and we have a transactiontypes.conf that is meant to perform a transaction on the field event_id_test, but so far it is not performing the transaction at all, even though the event_id_test field is being extracted. I tried reading through the docs for this but cannot see what I am missing or doing wrong.

props.conf:

[test_props]
EXTRACT-et = \.\d{3}\:(?P<event_id_test>\d+)

transactiontypes.conf:

[test_props]
maxspan=5s
maxpause=5s
fields=event_id_test
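One thing worth checking: as far as I know, transactiontypes.conf never merges events on its own; it only defines a named transaction that still has to be invoked at search time, e.g. (assuming the data carries the sourcetype matching the stanza):

sourcetype=test_props | transaction test_props

If the goal is truly one indexed event (so no transaction cost at search time), line merging in props.conf is the usual mechanism instead; a sketch, with a placeholder regex:

[test_props]
SHOULD_LINEMERGE = false
LINE_BREAKER = <regex marking the boundary between whole multi-line events>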
I'm trying to pass the result of one query as an input field for another query. Please see the queries below and help me out.

Query 1:

index=* sourcetype="prod-ecp-aks-" "bookAppointmentRequest" "Fname" "Lname" | fields data.req.headers.xcorrelationid

It returns the correlation ID.

Query 2:

index=* sourcetype="prod-ecp-aks" "7403cb0a-885d-36ee-0857-fa7e99741bf7" "da_appointment"

It returns the appointments for that correlation ID.

I want to combine these two queries and pass the correlation ID from the first into the second. Note: there is sometimes more than one correlation ID, and I need the appointment IDs for all of the correlation IDs. I have gone through so many links and tried join and subqueries, but didn't get the expected result. Please help me out. Thanks.
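A subsearch sketch that feeds every correlation ID from query 1 into query 2 (the return count of 100 is an arbitrary cap; raise it if there can be more IDs):

index=* sourcetype="prod-ecp-aks" "da_appointment"
    [ search index=* sourcetype="prod-ecp-aks-" "bookAppointmentRequest" "Fname" "Lname"
      | dedup data.req.headers.xcorrelationid
      | return 100 $data.req.headers.xcorrelationid ]

return $field emits the bare values ORed together, so the outer search matches any of the correlation IDs, the way the literal "7403cb0a-..." term did for a single one.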
Hi folks, I have a deployment of UF >> UF >> Indexers sending data with the default sendCookedData = true to the splunktcp://9997 port, but I'm getting data indexed as --splunk-cooked-mode-v3--. Any idea what configuration I should change so I don't get data in that form? Am I sending cooked data twice? Thanks.
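Seeing the literal --splunk-cooked-mode-v3-- banner inside indexed events is the classic symptom of cooked S2S traffic landing on a plain TCP input. Worth checking that the receiving tier listens with a splunktcp stanza, not a tcp one (sketch):

# inputs.conf on the receiving indexer/forwarder
[splunktcp://9997]
disabled = 0

# a plain [tcp://9997] stanza would index the raw cooked stream,
# which produces exactly the --splunk-cooked-mode-v3-- artifacts

Cooked data passing through two forwarders is normal in itself; it only breaks when some hop receives it on a raw input.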
Hi all, I have to get logs from MobileIron Cloud into Splunk Cloud. I downloaded the MobileIron Cloud App, but it is only for Splunk on-premises and it doesn't pass the checks on Splunk Cloud. Does anybody know if there's a version of this app for Splunk Cloud, or where to look for a solution? Thanks. Giuseppe
Hello everyone, I'm monitoring my Splunk Enterprise instance by looking at splunkd logs, both via the CLI and via search:

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" log_level=ERROR

I find numerous SearchParser errors, namely the following one:

ERROR SearchParser [20709 TcpChannelThread] - Missing a search command before '|'. Error at position '2' of search query '| |'.

How can I trace back to the search that generated such an error (either the search string or the sid is fine)? Is that "20709" something of interest in this scenario?
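One sketch for tracing it back: the audit log records every search string together with its sid and user, so searching it for the offending fragment may surface the culprit (assuming audit events are being logged):

index=_audit action=search info=granted "| |"
| table _time, user, search_id, search

As for "20709": that looks like an internal process/thread identifier rather than a search id, so it is probably not useful for attribution on its own (an assumption worth verifying).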
Hi Team, our clients are accidentally clicking the Run option of saved searches, and I can see duplicate events in the summary index. I want to disable/remove the Run option from Splunk reports/alerts for specific users. How can I achieve this? Please suggest.
Hi, I need to create a pie chart; however, the chart renders BOTH categories in the SAME colour. How can I make them different? Thanks
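A sketch using simple XML chart options; charting.seriesColors assigns hex colours to slices in order, so verify the order against how your two categories sort:

<option name="charting.seriesColors">[0x53a051, 0xdc4e41]</option>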
Hi, should I send event logs to the HTTP Event Collector with gzip encoding, or only in plain text?
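As far as I know, HEC accepts gzip-compressed payloads via the standard Content-Encoding header (worth verifying against the docs for your Splunk version); plain text also works, and compression mainly pays off at high volume over constrained links. A curl sketch with placeholder host and token:

echo '{"event": "hello"}' | gzip | curl -sk "https://splunk.example.com:8088/services/collector/event" \
    -H "Authorization: Splunk <your-hec-token>" \
    -H "Content-Encoding: gzip" \
    --data-binary @-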
How many log events can be sent in one HTTP POST command? Is there a limit? What is the size limit of the payload?
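There is no fixed event-count limit as far as I know; the cap is on the total request body size, governed by max_content_length under [http_input] in limits.conf on the receiving instance (the default has changed between Splunk versions, so check yours). A sketch:

# limits.conf on the HEC receiver
[http_input]
# maximum HTTP request body size, in bytes (default varies by version)
max_content_length = 1000000

Batching many events into one POST is fine as long as the body stays under that limit.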
Good morning all, I am creating a timechart which has to show me the top values. The query: [screenshot of the query] When I try to click on a particular value, like here: [screenshot of the chart], it directs me to all the events of that kind, not only the max() values that are presented on the chart because of the line:

| timechart span=1d max(Priority_diffrence) by risk_object

Do you have an idea how to solve that? Any hints kindly welcome.
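One sketch, if the goal is to control what a click does: a custom drilldown in the panel's simple XML can capture the clicked series and time into tokens, which a second search can then use to re-apply the max() filter (the token names set here are illustrative; $click.name2$ and $click.value$ are standard chart click tokens):

<drilldown>
  <set token="selected_object">$click.name2$</set>
  <set token="selected_time">$click.value$</set>
</drilldown>

$click.name2$ carries the clicked series name (here the risk_object) and $click.value$ the clicked day, so a target panel can search only that object and day instead of expanding to all raw events.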
Hi all, I'm trying to create a new input for our custom REST API call. As this call should only be executed once a month (ticket data), I'm struggling with the only available interval option, which is in seconds rather than crontab format. I further discovered that, when the interval is set in seconds and Splunk is restarted, the seconds counter restarts from that point in time, which messes with a predictable input interval. Question: is it possible to use a cron entry in the underlying inputs.conf file of the created AoB app? Thanks for any feedback. Lothar
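It may be worth trying a cron schedule directly in the interval setting: for scripted inputs (and, as far as I know, modular inputs on recent Splunk versions) inputs.conf accepts interval = <number>|<cron schedule>. A sketch, with an illustrative stanza name:

# inputs.conf inside the AoB-generated app (stanza name is hypothetical)
[my_rest_input://monthly_tickets]
# run at 02:00 on the 1st of every month instead of counting seconds
interval = 0 2 1 * *

Whether the Add-on Builder UI preserves a hand-edited cron interval after an app update is worth verifying.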
Hello, I am administering a distributed environment with 1 search head and 10 peers. What is special is that communication goes over a satellite link, so bandwidth is limited. The search head has Splunk Enterprise Security installed and is also a deployment server. The peers have the indexer role and all ingest Suricata IDS logs, while only one of them also ingests Windows logs. I have measured that 3 GB per day of data is exchanged between the search head and the indexers, which seems quite a lot to me. Can someone please explain to me what kind of data is transferred by default in a distributed environment? Some things to note: 1. The notable index and internal logs are stored locally on the search head and not forwarded to peers. 2. The replication bundle is 16 MB. Thank you in advance. With kind regards, Chris
I am new to Splunk. My search query returns table values, and I want to change the first table below into the second table's format. The percentage calculation is the sum of the "0-5% - Q1" row value divided by the sum of the column total. How can I achieve this? Please help me. Thanks in advance.