All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Can you please help me understand whether the Google Workspace Add-on is the updated equivalent of the G Suite for Splunk add-on? We used G Suite earlier; after seeing that the app had been updated, we installed and configured Google Workspace. However, the sourcetypes and the way events are parsed are not the same as with G Suite. Thanks in advance.
Pretty much the title. I tried messing with the user interface navigation settings, and the closest I can get is making the glass table lister the default page. But this also alters the user interface (see screenshot). That would be a separate issue; my main concern is making a specific glass table the default page when opening ITSI. Any help would be greatly appreciated.
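A hedged sketch of the navigation-XML mechanism, in case it helps: in an app's navigation menu XML, whichever <view> carries default="true" becomes the landing page. Glass tables are not ordinary views, so one workaround (an assumption on my part, not a documented ITSI feature) is a small dashboard that redirects to the specific glass table's URL, set as the default view. All names below are illustrative:

<nav search_view="search">
  <!-- hypothetical redirect dashboard that forwards to one glass table -->
  <view name="my_glass_table_home" default="true" />
  <view name="glass_table_lister" />
  <view name="search" />
</nav>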
Hello, below is the existing stanza in inputs.conf:

[monitor:///var/log]
whitelist = (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist = (lastlog|anaconda\.syslog)
disabled = 1

I also want to blacklist the folder /var/log/dynatrace and any logs within that folder and its subfolders. Can you please explain how this can be done? Is the syntax below correct?

blacklist = (lastlog|anaconda\.syslog)|(dynatrace)

Appreciate your experience and help.
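A minimal sketch of one way to do this, assuming the monitor input's blacklist behaves as documented: it is a single regular expression matched against each file's full path, so including the directory path in the alternation should exclude everything beneath /var/log/dynatrace, subfolders included:

[monitor:///var/log]
whitelist = (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist = (lastlog|anaconda\.syslog|/var/log/dynatrace/)
disabled = 1

The proposed (lastlog|anaconda\.syslog)|(dynatrace) is also a valid regex, but the bare word dynatrace would match any path containing that string anywhere, so anchoring on the directory path is safer.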
The data I have is:

816851-567-7554080981706881
50A720 -123-8150015922249983
816851-567-1135131573613120
816851-567-0065137870504409
50A720 -123-1135131573613120
816851-567-0065137870504409
50A720 -123-1135131573613120
50A720 -123-0065137870504409

I want to extract 50A720 or 816851 using

| rex field=name mode=sed "s/816851/"

but I'm getting this error:

Error in 'rex' command: Failed to initialize sed. Failed to parse the replacement string.
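Two hedged sketches, assuming the field holding these values is called name. The error occurs because a sed substitution needs the full form s/<regex>/<replacement>/ with a closing delimiter; s/816851/ is incomplete. More importantly, mode=sed replaces text rather than extracting it; to pull out the leading code, a capture group is the usual approach:

| rex field=name "^(?<code>816851|50A720)"

If the goal really was removal via sed, the substitution needs all three delimiters (here with an empty replacement):

| rex field=name mode=sed "s/816851//g"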
I'm interested in suggestions on how to tackle this. I know how I would implement it in Python, but I'm not really sure of the best practice for SOAR. Let's say I have an action called "Lookup Host". If it runs successfully, it returns a dict with some data:

[{"hostname": "test1", "device_id": "abc123"}]

but we might not actually have data on this host, in which case it returns empty:

[]

I need to ensure that we have data, because otherwise later playbook actions won't complete. Would we use a decision here, like "if result != []: continue, else: exit playbook"? Here is loosely what I want to do, in Python:

result = LookupHost(hostname="test1")
if result:
    # Have a result, so we can continue
    run_second_action()
else:
    # No data found, exit
    exit(0)
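A hedged sketch of the custom-code route, written in the style of VPE-generated playbook code; the action block name lookup_host_1 and the downstream block run_second_action are illustrative, not real names from your playbook. The no-code equivalent is a Decision block whose condition tests the same datapath (e.g. lookup_host_1:action_result.data.*.hostname is not empty).

import phantom.rules as phantom

def lookup_host_callback(action=None, success=None, container=None, results=None, handle=None, **kwargs):
    # collect2 flattens the named datapath across all result objects;
    # an empty list means the lookup action returned no data
    hostnames = phantom.collect2(
        container=container,
        datapath=["lookup_host_1:action_result.data.*.hostname"],
        action_results=results,
    )
    if hostnames:
        run_second_action(container=container)  # hypothetical downstream block
    # else: do nothing, letting this playbook path end here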
I'm trying to centralize our app information on our HFs. Each HF has the following scheduled search set up:

| rest /services/apps/local
| search disabled=0
| table splunk_server label title version update.version check_for_updates
| collect index="meta_apps"

The index "meta_apps" exists on the HF and in Splunk Cloud. However, I don't see these results in Splunk Cloud. What am I missing?
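A hedged note on where to look: collect does not send events across instances by itself; it writes a stash file that the local instance (the HF) ingests, so the events only reach Splunk Cloud if the HF forwards what it ingests. Assuming outputs.conf on the HF already points at Splunk Cloud, the indexAndForward stanza (heavy forwarders only) controls whether such locally generated data is kept, forwarded, or both; the sketch below is illustrative, not a confirmed diagnosis.

# outputs.conf on the HF (sketch; assumes a tcpout group to Splunk Cloud exists)
[indexAndForward]
# false = forward ingested data to the tcpout group only;
# true = also keep a local copy in the HF's meta_apps index
index = false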
Hi, my team and I are currently developing a website which needs to pull data from Splunk and insert it into visualizations on the website's home page. As the title suggests, we are using React and NodeJS, and due to our absolute lack of Splunk experience we are a bit bogged down, so please forgive me if this is a potentially dumb question. We are trying to use the Splunk JavaScript SDK in Node to establish a connection and pull data from Splunk. We have tried absolutely everything at this point but cannot establish a connection and perform a simple service.login through the SDK. We have tried this with Postman and it appears to be working just fine on that side of things. For example, we have tried using the code from Server Side JavaScript, but when running it, it throws the following error:

throw err;
^
{
  response: { headers: {}, statusCode: 600 },
  status: 600,
  data: undefined,
  error: Error: connect ECONNREFUSED ::1:8089
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1195:16) {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 8089
  }
}

Node.js v17.9.0

or this error:

throw err;
^
{
  response: { headers: {}, statusCode: 600 },
  status: 600,
  data: undefined,
  error: Error: write EPROTO 04490000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:c:\ws\deps\openssl\openssl\ssl\record\ssl3_record.c:355:
      at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16) {
    errno: -4046,
    code: 'EPROTO',
    syscall: 'write'
  }
}

Node.js v17.9.0

Can anyone please help? Any help would be greatly appreciated.
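A minimal sketch, assuming a local splunkd on the default management port; host and credentials below are illustrative. Two hints are in the errors themselves: ECONNREFUSED ::1:8089 suggests Node 17+ resolved "localhost" to IPv6 while splunkd is listening on IPv4, and "wrong version number" from OpenSSL suggests a scheme mismatch (connecting with one of http/https to a port speaking the other).

// sketch using the splunk-sdk package from npm
var splunkjs = require("splunk-sdk");

var service = new splunkjs.Service({
    scheme: "https",     // must match splunkd's protocol (EPROTO = scheme mismatch)
    host: "127.0.0.1",   // explicit IPv4; "localhost" may resolve to ::1 on Node 17+
    port: 8089,
    username: "admin",
    password: "changeme"
});

service.login(function (err, success) {
    if (err) {
        console.error("Login failed:", err);
        return;
    }
    console.log("Login successful:", success);
});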
I feel I'm getting lost in the sauce. I'm working on creating a props.conf for some syslog data on ingest (not search time) where the syslog message has its standard syslog content, and then my message starts off with a statement followed by colon-delimited fields on new lines. So like the message below. NOTE: the bold "normal" text changes depending on the message type, so this part is dynamic.

<priority>timestamp data1 data2 this is a normal message:
key:val
key1:val1
key2:val2
---
key_n:val_n

So I want to parse the first line and pull different values from the syslog message, and then after that just use a delimiter so I don't have to specify each field (because there are a lot of fields, up to 50 different key:value lines). First, I'm not sure how to tell Splunk: parse line 1 one way, and then use a delimiter on every other line. I'm sure there is a way? I've looked into attributes for structured data here. I want to treat the first line almost like a header (different from the rest of the log), but not like the FIELD_HEADER properties, as this isn't a delimited header I'm attempting to extract (like CSV data). How can I parse just the first line of my syslog (probably with some regex to grab everything appropriately), and then for the rest of my content use the delimiter? Maybe I could use FIELD_DELIMITER=: ? Additionally, I'm thinking I might have to use the transforms DELIMS property here, something like: DELIMS = "\r\n", ":"
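A hedged sketch of the search-time version, assuming a sourcetype name of my_syslog (illustrative). One REPORT handles the first line with a regex; a second handles every subsequent key:value line with a dynamic-key transform, where FORMAT = $1::$2 names each field after its first capture group. Note that DELIMS and REPORT are search-time mechanisms; genuinely index-time field extraction would instead need TRANSFORMS with WRITE_META = true, which is rarely worth the storage and flexibility cost.

# props.conf
[my_syslog]
REPORT-first_line = syslog_first_line
REPORT-kv_lines = syslog_kv_lines

# transforms.conf
[syslog_first_line]
# the capture names here are illustrative guesses at your first-line layout
REGEX = ^<(?<pri>\d+)>(?<ts>\S+)\s+(?<data1>\S+)\s+(?<data2>\S+)\s+(?<msg_type>[^:\r\n]+):

[syslog_kv_lines]
# each subsequent "key:value" line becomes a field named after the key
REGEX = (?m)^(\w+):([^\r\n]*)$
FORMAT = $1::$2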
Hi all, is there a way of extracting a list of EUM app keys via the API from a controller, in the same way it is possible to extract the APM application list? e.g. "https://<controller_name>.saas.appdynamics.com/controller/rest/applications". I have trawled the docs but found no mention of this... Many thanks in advance! Philippe
I was wondering if anyone has successfully onboarded KnowBe4 data? I don't see a TA or app on Splunkbase.
Hi Splunk team, I'm trying to create a query that takes the payment IDs from one table and keeps only the payment IDs that have a completed status in another table. The completed status can happen at a later date, so I would like the subsearch to search within 10 days after the original search. My query seems to work when I search for a specific ID in the subsearch, but when I remove it, it returns no results. I'm also open to not using a join/making this more efficient, but I wasn't sure how else to do it!

auditSource="open-banking" auditType="PaymentResponse"
| fields detail.ecospendPaymentId, detail.amount
| convert num(detail.amount) as amount
| table detail.ecospendPaymentId, amount
| join type=inner detail.ecospendPaymentId
    [ search auditSource="open-banking-external-api" auditType="PaymentStatusUpdate" detail.status="Completed" latest=+10d
    | fields detail.paymentId
    | rename detail.paymentId as "detail.ecospendPaymentId" ]
| dedup "detail.ecospendPaymentId"
| table "detail.ecospendPaymentId", amount

Thank you!
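A hedged join-free sketch, reusing the field names from the query above and assuming both sourcetypes are searched over one time range wide enough to cover the 10-day lag (e.g. extend the outer latest by 10 days). stats over a shared ID usually scales better than join, which silently truncates large subsearch results:

(auditSource="open-banking" auditType="PaymentResponse") OR (auditSource="open-banking-external-api" auditType="PaymentStatusUpdate" detail.status="Completed")
| eval paymentId = coalesce('detail.ecospendPaymentId', 'detail.paymentId')
| eval amount = tonumber('detail.amount')
| stats values(amount) as amount, count(eval('detail.status'=="Completed")) as completedCount by paymentId
| where completedCount > 0
| table paymentId, amount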
Hi everyone, I'm having a little trouble with a Treemap visualization. I'm using Splunk Enterprise v8.2.5, and Treemap is a custom Splunk visualization (I downloaded it from Splunkbase at this page). I wanted to create a treemap with a dataset that, after aggregation, looks like the following table:

category  subcategory  size  status
A         a             1    low
A         b             2    low
A         c            10    high
B         a             5    low
B         b             3    medium
B         c             4    high
C         a             1    medium
C         b             2    high
D         b             7    low
D         c             5    high

In this example, the first level of the treemap hierarchy (parent category field) is represented by the field called "category"; the field "subcategory" represents the second level (child category field); the "size" field is the numerical value by which each rectangle should be sized; and the "status" field should set the color of each rectangle. Here is a sample XML for a dashboard with a treemap visualization based on some dummy data that looks like the above example:

<dashboard>
  <label>treemap example</label>
  <row>
    <panel>
      <viz type="treemap_app.treemap">
        <search>
          <query>| makeresults | eval size=1, status="low", category="A", subcategory="a"
| append [| makeresults | eval size=2, status="low", category="A", subcategory="b" ]
| append [| makeresults | eval size=10, status="high", category="A", subcategory="c" ]
| append [| makeresults | eval size=5, status="low", category="B", subcategory="a" ]
| append [| makeresults | eval size=3, status="medium", category="B", subcategory="b" ]
| append [| makeresults | eval size=4, status="high", category="B", subcategory="c" ]
| append [| makeresults | eval size=1, status="medium", category="C", subcategory="a" ]
| append [| makeresults | eval size=2, status="high", category="C", subcategory="b" ]
| append [| makeresults | eval size=7, status="low", category="D", subcategory="b" ]
| append [| makeresults | eval size=5, status="high", category="D", subcategory="c" ]
| table category, subcategory, size, status
| stats first(size) as size by category, subcategory, status</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="treemap_app.treemap.colorMode">categorical</option>
        <option name="treemap_app.treemap.maxCategories">10</option>
        <option name="treemap_app.treemap.maxColor">#dc4e41</option>
        <option name="treemap_app.treemap.minColor">#53a051</option>
        <option name="treemap_app.treemap.numOfBins">9</option>
        <option name="treemap_app.treemap.showLabels">true</option>
        <option name="treemap_app.treemap.showLegend">true</option>
        <option name="treemap_app.treemap.showTooltip">true</option>
        <option name="treemap_app.treemap.useColors">true</option>
        <option name="treemap_app.treemap.useZoom">true</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>

Below is a screenshot of the result: [screenshot]

The Treemap documentation says to use the following query to set a custom color based on a field different from the parent category field:

... | stats <stats_function>(<metric_field>) <stats_function>(<color_field>) by <parent_category_field> <child_category_field>

so first I tried the following query:

... | stats first(size) as size, first(status) as status by category, subcategory

but Splunk was returning this error:

Error rendering Treemap visualization: Check the Statistics tab. To build a treemap with colors determined by a color field, the results table must include columns representing these four fields: <category>, <name>, <metric>, and <color>. The <color> and <metric> field values must be numeric.

So apparently both the metric and color aggregations must be numeric (side note: this is not explained in the documentation). Then I tried this query:

... | stats first(size) as size by category, subcategory, status

i.e. I put the "status" field as a third-level grouping. This time the visualization seems to work as I intended, i.e. the color of each rectangle is decided by the value of the status field (as seen in the screenshot above). However, it is not possible to change the default color palette. For my application I would like to set the colors using this mapping:

green when status="low"
yellow when status="medium"
red when status="high"

So far I have not been able to find a way to modify the visualization (through the XML definition) in order to set a custom color mapping. Does anyone know a way to do this?
Again, I wanted to list the difference in dates between two periods, and I have this code:

| eval LPD = strptime(LastPickupDate, "%m-%d-%Y %H:%M:%S")
| eval IInT = strptime(IIT, "%m-%d-%Y %H:%M:%S")
| eval diff = (IInT-LPD)/86400
| stats list(diff) by FacilityName

but I'm still getting blanks.
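A hedged debugging sketch: blanks from this pattern usually mean strptime returned null because the format string doesn't match the actual field values (e.g. slashes instead of dashes, or a different field order). Surfacing the raw and parsed values side by side shows which field fails to parse; the format strings below are just the ones from the question.

| eval LPD = strptime(LastPickupDate, "%m-%d-%Y %H:%M:%S")
| eval IInT = strptime(IIT, "%m-%d-%Y %H:%M:%S")
| eval parse_problem = case(isnull(LPD), "LastPickupDate did not parse", isnull(IInT), "IIT did not parse", 1=1, "ok")
| eval diff = round((IInT - LPD) / 86400, 2)
| table FacilityName, LastPickupDate, LPD, IIT, IInT, parse_problem, diff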
Hi everybody, my data is: A = 10, B = 20, C = 30. The formula that I use is: result = A/(B+C), but I have to verify it: the result should only display when all 3 values exist; if not (one of them or all 3 of them are null), it should display as "--". Here is my command:

| eval Result = case(isnotnull(A) AND isnotnull(B) AND isnotnull(C), round(A/(B+C)), 1=1, "--")

For now, if one of them is null, it displays "--", but when all 3 of them are null, it shows the text "No result". How can I make it show "--" in both cases, whether 1 of them or all 3 of them are null? Thanks
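A hedged guess at the cause: "No result" typically appears when the search returns no rows at all (so the eval never runs), rather than rows whose fields are null. Assuming that's the case here, appendpipe can inject a placeholder row whenever the result set is empty:

| eval Result = case(isnotnull(A) AND isnotnull(B) AND isnotnull(C), round(A/(B+C)), 1=1, "--")
| appendpipe
    [ stats count
    | where count = 0
    | eval Result = "--"
    | fields Result ]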
How can I write a search query for disk partition I/O (as a pie chart) using the Unix TA, which is onboarding Linux data? Any help much appreciated. Thank you.
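A hedged starting point, assuming the Splunk Add-on for Unix and Linux is writing iostat data to an os index; the index, sourcetype, and field names (Device, rKB_PS, wKB_PS) vary by TA version, so verify them first (e.g. with | fieldsummary). A stats ... by Device result renders directly as a pie chart:

index=os sourcetype=iostat
| stats avg(rKB_PS) as read_kbps, avg(wKB_PS) as write_kbps by Device
| eval total_kbps = read_kbps + write_kbps
| fields Device, total_kbps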
Currently we are ingesting events that have multiple event IDs logged on new lines. We want those to appear as one event in Splunk, since running "| transaction event_id" slows our searches down significantly. It looks like we should be able to use transactiontypes.conf, but I am confused about how to get this to work. We are extracting the event_id in props.conf as event_id_test, and we have a transactiontypes.conf that should perform a transaction on the field event_id_test, but so far it is not performing the transaction at all, even though the event_id_test field is being extracted. I tried reading through the docs for this but cannot see what I am missing or doing wrong.

props.conf:

[test_props]
EXTRACT-et = \.\d{3}\:(?P<event_id_test>\d+)

transactiontypes.conf:

[test_props]
maxspan=5s
maxpause=5s
fields=event_id_test
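A hedged note on the likely gap: a transactiontypes.conf stanza defines a named transaction type; it is not keyed to a sourcetype and never runs automatically. It only takes effect when a search invokes it by name. A sketch under that assumption (the stanza name eventid_txn is illustrative):

# transactiontypes.conf
[eventid_txn]
maxspan = 5s
maxpause = 5s
fields = event_id_test
search = sourcetype=test_props

# then invoke it explicitly at search time:
sourcetype=test_props | transaction name=eventid_txn

Note this still computes the transaction at search time; if the goal is a single indexed event, that would instead be an event-breaking change at ingest (LINE_BREAKER / SHOULD_LINEMERGE in props.conf).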
I'm trying to pass the result of one query as an input field for another query. Please see the queries below and help me out.

Query 1:

index=* sourcetype="prod-ecp-aks-" "bookAppointmentRequest" "Fname" "Lname" | fields data.req.headers.xcorrelationid

It returns the correlation ID.

Query 2:

index=* sourcetype="prod-ecp-aks" "7403cb0a-885d-36ee-0857-fa7e99741bf7" "da_appointment"

It returns the appointments for that correlation ID.

I want to combine these two queries and pass the correlation ID from the first into the second. Note: there is sometimes more than one correlation ID, and I need the appointment IDs for all of them. I have gone through many links and tried join and subqueries, but didn't get the expected result. Please help me out. Thanks.
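A hedged subsearch sketch, keeping the field and sourcetype names from the question: the inner search collects every correlation ID, and return with the $ prefix emits just the values, which the outer search then treats as search terms (so multiple IDs are ORed together automatically). The limit of 100 is illustrative:

index=* sourcetype="prod-ecp-aks" "da_appointment"
    [ search index=* sourcetype="prod-ecp-aks-" "bookAppointmentRequest" "Fname" "Lname"
    | return 100 $data.req.headers.xcorrelationid ]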
Hi folks, I have a deployment of UF >> UF >> Indexers sending default data with sendCookedData = true to port splunktcp://9997, but the data is arriving as --splunk-cooked-mode-v3--. Any idea what configuration I should change so the data doesn't arrive in that form? Am I sending cooked data twice? Thanks.
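A hedged observation: --splunk-cooked-mode-v3-- showing up inside indexed events usually means a cooked Splunk-to-Splunk stream was received by a raw [tcp://] input instead of a [splunktcp://] input somewhere along the chain; chaining UFs does not "double-cook" the data. A sketch of the receiving side, assuming port 9997:

# inputs.conf on the intermediate UF and on the indexers
[splunktcp://9997]
disabled = 0

# and NOT this, which would index the cooked stream as raw text:
# [tcp://9997]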
Hi all, I have to get logs from MobileIron Cloud into Splunk Cloud. I downloaded the MobileIron Cloud app, but it is only for Splunk on-premises and it doesn't pass the checks on Splunk Cloud. Does anybody know if there's a version of this app for Splunk Cloud, or where to look for a solution? Thanks. Giuseppe
Hello everyone, I'm monitoring my Splunk Enterprise instance and, looking at splunkd logs both via the CLI and via search with:

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" log_level=ERROR

I find numerous SearchParser errors, namely the following one:

ERROR SearchParser [20709 TcpChannelThread] - Missing a search command before '|'. Error at position '2' of search query '| |'.

How can I trace back to the search that generated this error (either the search string or the sid is fine)? Is that "20709" something of interest in this scenario?
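A hedged starting point: the _audit index records search activity per user, and a search submitted with a malformed string may still appear there, so searching it for the offending fragment can recover the user and sid. The 20709 is the ID of the TcpChannelThread that handled the request, which generally isn't directly joinable to a sid. A sketch:

index=_audit action=search "| |"
| table _time, user, search_id, search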