All Topics


Hi, I am trying to figure out a way to display the creation time of a notable event, the time it was assigned to someone, and the time its status was set to Closed. I would then like to list the time differences between all three timestamps; it is for SLA purposes in our SOC. Note: when notables are created in my environment, the default status is "New". I have seen examples that produce the mean/average closure time for notables, but I am looking for a search that shows these times for every notable created (say, within the last 24 hours). Any help would be much appreciated!
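As a rough starting point, here is a sketch of the kind of search that could be adapted, assuming the Enterprise Security `incident_review` lookup records one row per analyst action with `time`, `rule_id`, `owner`, and `status` fields — the field names and status labels vary by environment, so treat them as placeholders:

```spl
| inputlookup incident_review
| stats min(time) as created
        min(eval(if(isnotnull(owner), time, null()))) as assigned
        max(eval(if(status=="closed", time, null()))) as closed
        by rule_id
| eval mins_to_assign = round((assigned - created) / 60, 1)
| eval mins_to_close  = round((closed - created) / 60, 1)
| where created >= relative_time(now(), "-24h")
```

This lists one row per notable rather than an aggregate, which matches the per-notable SLA reporting described above.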
Hi Splunkers, I have an issue merging two identity lookup files in ES. My first lookup file has rows like this:

identity | priority | email
vagn | low | vag@gmail.com

The second lookup file looks like this:

identity | priority | email
vagn | critical | vag@gmail.com

I would expect that when I run the "| inputlookup append=T identity_lookup_expanded | entitymerge identity" command I would get a result like the one below, yet this doesn't happen:

identity | priority | email
vagn | critical, low | vag@gmail.com

Any ideas? I have already enabled the multivalue setting for the "priority" field so it can hold more than one value, but that didn't help.

Regards, Evang
Hi all, we have a request from an end user to monitor CSV files that are placed in a file share folder, and there is no Splunk agent running on the file share machine.

Example: Server01 is the actual application server, which generates a report, and Server02 is the file share machine where the reports are stored and shared with the user.

\\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv  --  location of the file to be monitored in Splunk.

The above path has the required permissions to access the file from the share drive. On Server01 we have a Splunk UF agent running, with inputs.conf configured to monitor the log files present on that server.

Question: can we use the same app that is present on Server01 to monitor the file present on Server02, since it has the required permission to access the file from that server?

Stanza in inputs.conf:

[monitor://\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv]
sourcetype = powerfile:power:osbd_terminateduser
index = indexname
disabled = 0
ignoreOlderThan = 14d

Kindly guide me on how to get this share folder monitored in Splunk.
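A minimal sketch of what the monitor stanza might look like on Server01's UF, assuming the UF's service account can read the share; the sourcetype and index names are taken from the question, and the main difference from the stanza above is that a UNC path needs the leading double backslash:

```ini
[monitor://\\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv]
sourcetype = powerfile:power:osbd_terminateduser
index = indexname
disabled = 0
ignoreOlderThan = 14d
```

Note this is a sketch, not a confirmed fix: monitoring a remote share also depends on the UF running as an account with network access to it, which is an assumption here.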
I have raw data where each event looks like this (simplified for this example):

{"time": "2022-01-20 16:40:02.325216", "name": "name1", "deployment": "found", "secret": "correct"}

If "deployment": "not_found", I would like a table like:

time | name | deployment
2022-01-20 16:40:02.325216 | name1 | not_found

If "secret": "incorrect", I would like a table like:

time | name | secret
2022-01-20 16:40:02.325216 | name1 | incorrect

Currently, my search looks like this:

index=index host=host source=source ("not_found" OR "incorrect") | table time name deployment secret

But this means that both fields (deployment and secret) are shown no matter what their values are. @Ayn Is there a way to have a table whose fields vary depending on a certain condition? Thanks in advance!
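One possible sketch, assuming a single table is acceptable where only the offending field and its value are kept per event — the helper fields `field` and `problem` are introduced here purely for illustration:

```spl
index=index host=host source=source ("not_found" OR "incorrect")
| eval field   = case(deployment=="not_found", "deployment",
                      secret=="incorrect",    "secret")
| eval problem = case(deployment=="not_found", deployment,
                      secret=="incorrect",    secret)
| table time name field problem
```

This sidesteps the fact that a single table cannot literally change its column set per row, by folding both conditions into shared columns.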
Hello everyone, I have read the documentation about exporting Splunk ES content as an app: https://docs.splunk.com/Documentation/ES/7.0.0/Admin/Export but I have more objects to export than the 250 the dropdown allows me to select. I would like to move the ES app to another server with its settings, custom menu, altered dashboards, data models, etc. included. Is there a way to export it? Thank you in advance. Chris
What app and add-on work best with logs from Imprivata? Can the Cisco Networks Add-on for Splunk Enterprise work? Does anyone have experience with this?

[syslog/imprivata/*]
host=imprivata
sourcetype=imprivata
index=imprivata
disabled = false
# ignoreOlderThan = 30

Read below: "I need some help making sure we are getting logs from the Cisco AP, and we need indexes created on the HF and SH. Also a parsing app for the Cisco AP logs."
Hi all, I'm wondering how to use the icons and styles on this page: http://127.0.0.1:8000/en-US/static/docs/style/style-guide.html For example, where can I find the code for using the accordion table? I don't want to use JS or CSS, only what is on this Splunk page. Regards,
Hi there, I'm a new Splunk user trying to use the new Dashboard Studio variant of dashboards, like the last example described here: https://docs.splunk.com/Documentation/Splunk/8.2.4/DashStudio/inputs#Example:_Search-based_cascading_inputs My problem is that the values for the dynamic multiselect input contain whitespace, and as soon as I use the "IN" operator in the search query, it returns no entries. If I manually change the search query and put all the values in quotes, it works as expected. Is there any way to do this in the definition of the input? I can also append an

eval appDisplayName = \"\\\"\".appDisplayName.\"\\\"\"

to the ds.search query, but this also adds the quotes to the display portion.

My complete JSON looks like this:

{
  "visualizations": {
    "viz_hSyaQ4tf": {
      "type": "splunk.table",
      "options": {},
      "dataSources": {
        "primary": "ds_saMdKSzT"
      }
    }
  },
  "dataSources": {
    "ds_saMdKSzT": {
      "type": "ds.search",
      "options": {
        "query": "sourcetype=\"azure:aad:signin\" userPrincipalName=$userPrincipalName$ AND appDisplayName IN ($appDisplayName$) | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId conditionalAccessStatus status.errorCode",
        "queryParameters": {
          "latest": "$global_time.latest$",
          "earliest": "$global_time.earliest$"
        }
      },
      "name": "SignIns"
    },
    "ds_XdUxasDT": {
      "type": "ds.search",
      "options": {
        "query": "sourcetype=\"azure:aad:signin\" | stats count by userPrincipalName",
        "queryParameters": {
          "latest": "$global_time.latest$",
          "earliest": "$global_time.earliest$"
        }
      },
      "name": "userPrincipalName-stats"
    },
    "ds_GQslD2fp": {
      "type": "ds.search",
      "options": {
        "query": "sourcetype=\"azure:aad:signin\" userPrincipalName=$userPrincipalName$ | stats count by appDisplayName",
        "queryParameters": {
          "latest": "$global_time.latest$",
          "earliest": "$global_time.earliest$"
        }
      },
      "name": "appDisplayName-stats"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    },
    "input_hcQWlw8q": {
      "title": "Select App",
      "type": "input.multiselect",
      "options": {
        "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
        "token": "appDisplayName"
      },
      "dataSources": {
        "primary": "ds_GQslD2fp"
      },
      "context": {
        "formattedConfig": {
          "number": {
            "prefix": ""
          }
        },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [
          [
            "All"
          ],
          [
            "*"
          ]
        ],
        "label": ">primary | seriesByName(\"appDisplayName\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"appDisplayName\") | renameSeries(\"value\") | formatByType(formattedConfig)"
      }
    },
    "input_E26xAMU9": {
      "options": {
        "defaultValue": "user@domain.com",
        "token": "userPrincipalName"
      },
      "title": "Select User",
      "type": "input.text"
    }
  },
  "layout": {
    "type": "grid",
    "options": {},
    "structure": [
      {
        "item": "viz_hSyaQ4tf",
        "type": "block",
        "position": {
          "x": 0,
          "y": 0,
          "w": 1200,
          "h": 400
        }
      }
    ],
    "globalInputs": [
      "input_global_trp",
      "input_E26xAMU9",
      "input_hcQWlw8q"
    ]
  },
  "description": "",
  "title": "Azure AD SignIns"
}

This produces the non-working query:

sourcetype="azure:aad:signin" userPrincipalName=bauera@herrenknecht.com AND appDisplayName IN (Microsoft Office 365 Portal,Windows Sign In,Office365 Shell WCSS-Client) | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId conditionalAccessStatus status.errorCode

I want it to be like this:

sourcetype="azure:aad:signin" userPrincipalName=bauera@herrenknecht.com AND appDisplayName IN ("Microsoft Office 365 Portal","Windows Sign In","Office365 Shell WCSS-Client") | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId conditionalAccessStatus status.errorCode

Thanks for your help.

Greetings, Andreas
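One possible workaround sketch: since the multiselect's `context` block above already uses separate `label` and `value` series, the quoting could be applied only to the field feeding the `value` series, leaving the display labels untouched. This assumes a helper field (here called `quotedApp`, a name introduced for illustration) added to the `ds_GQslD2fp` data source:

```spl
sourcetype="azure:aad:signin" userPrincipalName=$userPrincipalName$
| stats count by appDisplayName
| eval quotedApp = "\"" . appDisplayName . "\""
```

The input's `value` context would then point `seriesByName` at `quotedApp` while `label` keeps `appDisplayName`, so the token expands to pre-quoted values inside `IN (...)`. This is a sketch under those assumptions, not a verified Dashboard Studio recipe.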
I have JSON events with a field containing another object, but this object varies depending on type. For example, you may have these three logs under the same sourcetype/index:

{ "Log":"something","user": "me" ,"type":"car", "data": {"case1":"something"} }
{ "Log":"something","user": "me" ,"type":"apple", "data": {"fruity":"yummy"} }
{ "Log":"something","user": "me","type":"Cauliflower", "data":{"veggie":"eww", "fact":"good for you"} }

and I want a table query to look something like this:

user | data
me | {"case1":"something"}
me | {"fruity":"yummy"}
me | {"veggie":"eww", "fact":"good for you"}

I tried the following query:

index=mylog | table user,data

but my results usually look like this (with either nulls or straight-up empty values):

user | data
me | null
me |
me | null

data itself may sometimes be very long, but I would still like to see its entire output in the table. How can I go about this?
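One possible sketch: because `data` is a nested object rather than a scalar, it often does not auto-extract as a single field, but `spath` with an explicit `path` can pull the object's raw JSON text into a new field (the name `data_raw` is introduced here for illustration):

```spl
index=mylog
| spath output=data_raw path=data
| table user data_raw
```

This keeps the whole nested object as one string per row, however long it is, regardless of which keys it happens to contain.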
I was able to find the date when a correlation search was last updated, but I can't seem to find the original creation date of a correlation search.
Hello, I uploaded to Splunk a CSV with a list of names (only one column) and I want to add additional names to the CSV. How can I do that?
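A minimal sketch of one way to append a row to an existing lookup, assuming the uploaded file is registered as a lookup called `names.csv` with a single column `name` (both names are placeholders, and "Alice" is an example value):

```spl
| inputlookup names.csv
| append [| makeresults | eval name="Alice" | fields name]
| fields name
| outputlookup names.csv
```

`outputlookup` rewrites the file with the combined rows, so the existing names are preserved and the new one is added at the end.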
Is type=left the same as type=outer in Splunk? If so, why do they list it as three options? https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchReference/Join

type
Syntax: type=inner | outer | left
Description: Indicates the type of join to perform. The difference between an inner and a left (or outer) join is how the events are treated in the main search that do not match any of the events in the subsearch. In both inner and left joins, events that match are joined. The results of an inner join do not include events from the main search that have no matches in the subsearch. The results of a left (or outer) join include all of the events in the main search and only those values in the subsearch that have matching field values.
Default: inner
Hi, from my first panel, when I click on a row I want to display the details for that row. Currently it opens the details for all rows and not for the specific row I clicked. What is wrong, please?

<row>
  <panel>
    <table>
      <title>Bureau : $Site$</title>
      <search base="sante">
        <query>| stats count as "Nombre de lenteurs" by name | rename name as Nom | sort - "Nombre de lenteurs"</query>
      </search>
      <option name="drilldown">row</option>
      <format type="color" field="Nombre de lenteurs">
        <colorPalette type="minMidMax" maxColor="#DC4E41" minColor="#FFFFFF"></colorPalette>
        <scale type="minMidMax"></scale>
      </format>
      <drilldown>
        <set token="name">$click.value$</set>
      </drilldown>
    </table>
  </panel>
  <panel depends="$name$">
    <table>
      <title>Bureau : $Site$</title>
      <search base="sante">
        <query>| stats count(web_app_duration_avg_ms) as "Nb lenteurs Web" count(hang_process_name) as "Nb hang", count(crash_process_name) as "Nb crash" by name | rename name as Nom</query>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
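One hedged sketch of how the second panel's post-process search could filter on the clicked value, assuming the `$name$` token holds the first-column value of the clicked row (the underlying `name` field) — without such a filter, the second panel has no reason to narrow its results:

```spl
| stats count(web_app_duration_avg_ms) as "Nb lenteurs Web", count(hang_process_name) as "Nb hang", count(crash_process_name) as "Nb crash" by name
| search name="$name$"
| rename name as Nom
```

The filter runs before the `rename` so it matches the raw field, not the display name.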
Hello there, I have a report that is scheduled as follows: * * * * * But for the next scheduled time I got 2022-01-20 11:53:40 CET, while I want 2022-01-20 11:53:00 CET. Is there a way to set seconds? TY
I have a problem when I set up DLTK containers. I chose Golden Image CPU (3.7) from the list and I have already pulled phdrieger/mltk-container-golden-image-cpu:3.7.0 to the local Docker host, but I always get the error [list index out of range]. Can someone help me? That would be great.
I have created a bar graph. The following is the query:

index="cx_metrics_analysis" sourcetype="cx_metrics_httpevent" | eval duration=floor((TASK_DURATION)/3600000) | bin duration span=2s | chart distinct_count(TASK_NUM) as "Tasks" by duration | bin duration span=2

Since the bar graph has a lot of values on the x-axis, I'm trying to limit them. I'm trying to group the values into three buckets: one with a duration of less than 15, a second with a duration between 15 and 25, and a last one with a duration greater than 25.

| eval red = if(duration>25,duration,0) | eval yellow = if(duration<=25 AND duration>15,duration,0) | eval green = if(duration<=15, duration, 0)

Is this the correct method? Does anyone know how to solve this?
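An alternative sketch, assuming the goal is three labeled bars on the x-axis rather than three separate numeric fields — a single `case()` bucket field, introduced here as `bucket`, keeps the chart to exactly three columns:

```spl
index="cx_metrics_analysis" sourcetype="cx_metrics_httpevent"
| eval duration = floor(TASK_DURATION / 3600000)
| eval bucket = case(duration < 15,  "green (<15)",
                     duration <= 25, "yellow (15-25)",
                     duration > 25,  "red (>25)")
| chart distinct_count(TASK_NUM) as "Tasks" by bucket
```

The three separate `eval red/yellow/green` fields in the question would instead produce three series per duration value, which is usually not what a grouped x-axis needs.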
I know this can be done in a classic dashboard, but is there a way to provide the tooltip/hover functionality when using Dashboard Studio?
Is there an option to add a header and footer with a JPG in a scheduled report?
Hi Team, I want to configure an email alert for when the status code is 400, 401, 500, and so on; in other words, anything other than 200 should trigger the alert. It should check once every 30 minutes.
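A minimal sketch of a search that could back such an alert, assuming the events carry a `status` field (the index and sourcetype names here are placeholders); the alert would be scheduled with the cron expression `*/30 * * * *` and set to trigger when the number of results is greater than zero:

```spl
index=web sourcetype=access_combined status!=200
| stats count by status
```

Matching on `status!=200` covers every non-200 code at once, rather than enumerating 400, 401, 500, etc. individually.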
Hi Splunkers, I am experiencing issues with an indexer cluster and it would be great if you could help me out. Every time I change or create an index, a restart is required, and it takes up to an hour until all the indexers are ready again. This used to work without a restart and only started happening after an upgrade at some point. I found this, but it doesn't say anything about creating indexes. Do you have an idea where exactly this is coming from and whether it can be avoided in some way? Since changes are made weekly, it is really annoying.