I've been trying to resolve this since October and am not getting traction, so I'm turning to the community for help. I have seemingly contradictory information within the same log line, which makes me question whether we have an issue or not. On the one hand, I think we do, because the history command shows the search is cancelled, and I trust that information. On the other hand, there are artifacts in the logs suggesting the search ran fully (since "fully_completed_search=TRUE"), so I am now confused about whether we have a problem. Why do searches show fully_completed_search=TRUE and has_error_warn=FALSE when the info field (and the history command) show "cancelled" and carry a tag of "error"?

BOTTOM LINE QUESTION: Are my searches running correctly and returning all results, or not?

Sample _audit log search activity that I found - not sure if this gives any usable insight:

Audit:[timestamp=10-01-2021 16:31:40.338, user=redacted_user, action=search, info=canceled, search_id='1633105804.108286', has_error_warn=false, fully_completed_search=true, total_run_time=18.13, event_count=0, result_count=0, available_count=0, scan_count=133645, drop_count=0, exec_time=1633105804, api_et=1633104900.000000000, api_lt=1633105800.000000000, api_index_et=N/A, api_index_lt=N/A, search_et=1633104900.000000000, search_lt=1633105800.000000000, is_realtime=0, savedsearch_name="", search_startup_time="1270", is_prjob=false, acceleration_id="98DCBC55-D36C-4671-93CD-1A950D796EC4_search_redacted_user_311d202b50b71a64", app="search", provenance="N/A", mode="historical_batch", workload_pool=standard_perf, is_proxied=false, searched_buckets=53, eliminated_buckets=0, considered_events=133645, total_slices=331408, decompressed_slices=11305, duration.command.search.index=120, invocations.command.search.index.bucketcache.hit=53, duration.command.search.index.bucketcache.hit=0, invocations.command.search.index.bucketcache.miss=0, 
duration.command.search.index.bucketcache.miss=0, invocations.command.search.index.bucketcache.error=0, duration.command.search.rawdata=2533, invocations.command.search.rawdata.bucketcache.hit=0, duration.command.search.rawdata.bucketcache.hit=0, invocations.command.search.rawdata.bucketcache.miss=0, duration.command.search.rawdata.bucketcache.miss=0, invocations.command.search.rawdata.bucketcache.error=0, roles='redacted', search='search index=oswinsec (EventID=7036 OR EventID=50 OR EventID=56 OR EventID=1000 OR EventID=1001) | eval my_ts2 = _time*1000 | eval indextime=_indextime |table my_ts2,EventID | rename EventID as EventCode']
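In case it helps anyone reproduce this, one way to line the conflicting fields up side by side is to pull them straight from _audit. This is only a sketch; the field names are taken from the sample event above:

```spl
index=_audit action=search info=canceled
| table _time, user, search_id, info, has_error_warn, fully_completed_search, scan_count, result_count
```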
We are utilizing a deployment server to push out the UF agent config to our Citrix VMs; however, not all devices are reporting in to the deployment server - yet they show in Splunk Cloud as all devices sending data.

In working with support, they suggested renaming /opt/splunkforwarder/etc/instance.cfg to backup_instances.cfg (or something similar). This seems to make the device register; however, it will be overwritten when a new master image push is done. Has anyone encountered this before, and what steps have you used to monitor devices on the DS?

Thanks, Jeff

My engineering team uses the following script for the Splunk config:

Seal Script
# generalize splunk
Stop-Process -InputObject $p -Force
Start-Sleep -Seconds 3
if (get-service -Name SplunkForwarder | Where status -eq "stopped")
    {
        write-host "Splunk Service Stopped..."
    }
$Host.PrivateData.VerboseForegroundColor = 'Yellow'
start-process -nonewwindow -filepath "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" -argumentlist 'clone-prep-clear-config' -wait -verbose
write-host "Splunk Machine ID removed..."
Hi, I am trying to figure out a way to display the creation time of a notable event, the time it was assigned to someone, and then the time the status was set to Closed. I would then like to list the time differences between all three - it is for SLA purposes in our SOC. Note: when notables are created in my environment, the default status is "New". I've seen some examples that produce the mean/average closure time for notables, but I am looking for a search that shows it for every notable created (say, within the last 24 hours). Any help would be much appreciated!
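One possible starting point is the ES incident review KV store lookup, which records a row each time a notable's owner or status changes. The sketch below is heavily hedged: the lookup and field names (incident_review_lookup, rule_id, owner, status_label, time) are assumptions that can differ per ES version, the first review row is only an approximation of creation time, and status_label may need resolving from the numeric status field:

```spl
| inputlookup incident_review_lookup
| stats min(time) as created_time
        min(eval(if(owner!="unassigned", time, null()))) as assigned_time
        min(eval(if(status_label="Closed", time, null()))) as closed_time
        by rule_id
| eval time_to_assign = assigned_time - created_time,
       time_to_close  = closed_time - created_time
| convert ctime(created_time) ctime(assigned_time) ctime(closed_time)
```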
Hi Splunkers, I have an issue merging two identity lookup files in ES. My first lookup file has rows like this:

identity  priority  email
vagn      low       vag@gmail.com

The second lookup file looks like this:

identity  priority  email
vagn      critical  vag@gmail.com

I would expect that when I run the "| inputlookup append=T identity_lookup_expanded | entitymerge identity" command I would get a result like the one below, yet this doesn't happen:

identity  priority        email
vagn      critical, low   vag@gmail.com

Any ideas? I have already enabled the multivalue setting for the "priority" field so it can hold more than one value, but that didn't help.

Regards, Evang
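For reference, a minimal way to test entitymerge in isolation is to append both files explicitly before merging. This is only a sketch - the lookup names here are placeholders, and the key point is that both sources must be in the result set before entitymerge runs:

```spl
| inputlookup identity_lookup_first
| inputlookup append=true identity_lookup_second
| entitymerge identity
| search identity="vagn"
```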
Hi All, we have a request from an end user to monitor CSV files that are placed in a file share folder, and there is no Splunk agent running on the file share machine.

Example: Server01 is the actual application server that generates a report, and Server02 is the file share machine where the reports are stored and shared with the user.

\\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv  --  location of the file to be monitored in Splunk.

The path above has the required permission to access the file from the share drive. On Server01 we have the Splunk UF agent running and inputs.conf configured to monitor the log files present on that server.

Question: can we use the same app that is present on Server01 to monitor the file present on Server02, since it has the required permission to access the file from that server?

Stanza in inputs.conf:

[monitor://\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv]
sourcetype = powerfile:power:osbd_terminateduser
index = indexname
disabled = 0
ignoreOlderThan = 14d

Kindly guide me on how to get this share folder monitored in Splunk.
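For what it's worth, a UNC path in a monitor stanza normally needs the leading double backslash, and the account that splunkd runs as (not just the interactive user) must itself have rights to the share. A sketch of the stanza with only that change, all other values copied from the question:

```ini
[monitor://\\fileshare\power\Powerfile\TO\IAM\Export Files\OSBD - Terminated Users List.csv]
sourcetype = powerfile:power:osbd_terminateduser
index = indexname
disabled = 0
ignoreOlderThan = 14d
```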
I have raw data where each event looks like this (simplified for this example):

{"time": "2022-01-20 16:40:02.325216", "name": "name1", "deployment": "found", "secret": "correct"}

If "deployment": "not_found", I would like a table like:

time                        name   deployment
2022-01-20 16:40:02.325216  name1  not_found

If "secret": "incorrect", I would like a table like:

time                        name   secret
2022-01-20 16:40:02.325216  name1  incorrect

Currently, my search looks like this:

index=index host=host source=source ("not_found" OR "incorrect") | table time name deployment secret

But this means that both fields (deployment and secret) will be shown no matter what their values are. @Ayn Is there a way to have a table that varies its fields depending on a certain condition? Thanks in advance!
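One common workaround is to collapse the two optional columns into a single field/value pair with case(), since a single table cannot vary its column set per row. A sketch based on the fields above (issue_field and issue_value are invented names):

```spl
index=index host=host source=source (deployment="not_found" OR secret="incorrect")
| eval issue_field = case(deployment="not_found", "deployment",
                          secret="incorrect",    "secret"),
       issue_value = case(deployment="not_found", deployment,
                          secret="incorrect",    secret)
| table time name issue_field issue_value
```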
Hello everyone, I have read the documentation about exporting Splunk ES content as an app: https://docs.splunk.com/Documentation/ES/7.0.0/Admin/Export but I have more objects to export than the 250 the dropdown allows me to select. I would like to move the ES app to another server with its settings, custom menu, altered dashboards, data models, etc. included. Is there a way to export it? Thank you in advance. Chris
What app and add-on can best work with logs from Imprivata? Can the Cisco Networks Add-on for Splunk Enterprise work? Does anyone have experience with this?

[syslog/imprivata/*]
host=imprivata
sourcetype=imprivata
index=imprivata
disabled = false
# ignoreOlderThan = 30

Read below: "I need some help making sure we are getting logs from the Cisco AP, and we need indexes created on the HF and SH. Also a parsing app for the Cisco AP logs."
Hi all, I'm wondering how to use the icons and styles on this page: http://127.0.0.1:8000/en-US/static/docs/style/style-guide.html For example, where can I find the code for using the accordion table? I don't want to use JS or CSS, only what is on this Splunk page. Regards,
Hi there, i'm a new splunk user and try to use the new Dashboard Studio variant of dashboards like the last example described here: https://docs.splunk.com/Documentation/Splunk/8.2.4/DashStudio/inputs#Example:_Search-based_cascading_inputs My Problem is the values for the dynamic multiselect input have whitespaces in it and as soon as i use the "IN" operator in the search query this retruns no entries. If i manually change the search query and put all the values in quotes it is working as expected. Is there any way to do this in the definition of the input? I can also append a        eval appDisplayName = \"\\\"\".appDisplayName.\"\\\"\"       to the ds.search query but this also adds the quotes to the display portion.   My complete json looks like this:       { "visualizations": { "viz_hSyaQ4tf": { "type": "splunk.table", "options": {}, "dataSources": { "primary": "ds_saMdKSzT" } } }, "dataSources": { "ds_saMdKSzT": { "type": "ds.search", "options": { "query": "sourcetype=\"azure:aad:signin\" userPrincipalName=$userPrincipalName$ AND appDisplayName IN ($appDisplayName$) | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId conditionalAccessStatus status.errorCode", "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } }, "name": "SignIns" }, "ds_XdUxasDT": { "type": "ds.search", "options": { "query": "sourcetype=\"azure:aad:signin\" | stats count by userPrincipalName", "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } }, "name": "userPrincipalName-stats" }, "ds_GQslD2fp": { "type": "ds.search", "options": { "query": "sourcetype=\"azure:aad:signin\" userPrincipalName=$userPrincipalName$ | stats count by appDisplayName", "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } }, "name": "appDisplayName-stats" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { 
"latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" }, "input_hcQWlw8q": { "title": "Select App", "type": "input.multiselect", "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "appDisplayName" }, "dataSources": { "primary": "ds_GQslD2fp" }, "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [ [ "All" ], [ "*" ] ], "label": ">primary | seriesByName(\"appDisplayName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"appDisplayName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_E26xAMU9": { "options": { "defaultValue": "user@domain.com", "token": "userPrincipalName" }, "title": "Select User", "type": "input.text" } }, "layout": { "type": "grid", "options": {}, "structure": [ { "item": "viz_hSyaQ4tf", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ], "globalInputs": [ "input_global_trp", "input_E26xAMU9", "input_hcQWlw8q" ] }, "description": "", "title": "Azure AD SignIns" }         This produces the not working query like this:       sourcetype="azure:aad:signin" userPrincipalName=bauera@herrenknecht.com AND appDisplayName IN (Microsoft Office 365 Portal,Windows Sign In,Office365 Shell WCSS-Client) | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId conditionalAccessStatus status.errorCode        I want it to be like this:       sourcetype="azure:aad:signin" userPrincipalName=bauera@herrenknecht.com AND appDisplayName IN ("Microsoft Office 365 Portal","Windows Sign In","Office365 Shell WCSS-Client") | table createdDateTime userPrincipalName userId appDisplayName appId resourceDisplayName resourceId 
conditionalAccessStatus status.errorCode         Thanks for your help.   Greetings Andreas
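One approach sometimes used for this is to keep the display label unquoted but build a second, quoted column in the options search, then point the input's value series at the quoted column, so only the token (not the visible label) carries the quotes. A sketch of the options data source (quotedName is a hypothetical field name):

```spl
sourcetype="azure:aad:signin" userPrincipalName=$userPrincipalName$
| stats count by appDisplayName
| eval quotedName = "\"" . appDisplayName . "\""
```

In the input's context, label would stay on seriesByName("appDisplayName") while value would switch to seriesByName("quotedName").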
I have JSON with a field containing another object, but that object varies depending on type. For example, you may have these three logs under the same sourcetype/index:

{ "Log":"something", "user":"me", "type":"car", "data": {"case1":"something"} }
{ "Log":"something", "user":"me", "type":"apple", "data": {"fruity":"yummy"} }
{ "Log":"something", "user":"me", "type":"Cauliflower", "data": {"veggie":"eww", "fact":"good for you"} }

and I want a table query to look something like this:

user | data
me   | {"case1":"something"}
me   | {"fruity":"yummy"}
me   | {"veggie":"eww", "fact":"good for you"}

I tried the following query:

index=mylog | table user,data

but my results usually look like this (either nulls or straight-up empty):

user | data
me   | null
me   |
me   | null

data itself may sometimes be very long, but I would still like to see its entire output in the table. How can I go about this?
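A sketch of one approach: spath can return the raw JSON of a non-leaf path, which keeps the whole data object intact regardless of its inner keys (assuming the events are valid JSON; data_obj is an invented output name):

```spl
index=mylog
| spath output=data_obj path=data
| table user, data_obj
```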
I was able to find the date when a correlation search was last updated, but I can't seem to find the original creation date of a correlation search.
Hello, I uploaded to Splunk a CSV with a list of names (only one column) and I want to add additional names to the CSV. How can I do that?
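One way is to read the lookup back, append the new rows, and write it out again. A sketch, assuming the file is called names.csv and its single column is called name (both are placeholders for your actual file):

```spl
| inputlookup names.csv
| append
    [ | makeresults
      | eval name="new_person"
      | table name ]
| outputlookup names.csv
```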
Is type=left the same as type=outer in Splunk? If so, why do they list it as three options? https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchReference/Join

type
Syntax: type=inner | outer | left
Description: Indicates the type of join to perform. The difference between an inner and a left (or outer) join is how the events are treated in the main search that do not match any of the events in the subsearch. In both inner and left joins, events that match are joined. The results of an inner join do not include events from the main search that have no matches in the subsearch. The results of a left (or outer) join include all of the events in the main search and only those values in the subsearch that have matching field values.
Default: inner
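Per the documentation quoted above, left and outer are two spellings of the same join type, so the three listed options really describe two behaviors. A hypothetical example (index and field names invented for illustration):

```spl
index=web_logs
| join type=left user
    [ search index=vpn_logs
      | stats latest(src_ip) as vpn_ip by user ]
```

With type=left (or, equivalently, type=outer), web_logs events with no matching user in the subsearch are kept with vpn_ip empty; with type=inner they would be dropped.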
Hi, from my first panel, when I click on a row I want to display the results for that row. Currently it opens the details for all rows and not for the specific row I clicked. What is wrong, please?

<row>
  <panel>
    <table>
      <title>Bureau : $Site$</title>
      <search base="sante">
        <query>| stats count as "Nombre de lenteurs" by name | rename name as Nom | sort - "Nombre de lenteurs"</query>
      </search>
      <option name="drilldown">row</option>
      <format type="color" field="Nombre de lenteurs">
        <colorPalette type="minMidMax" maxColor="#DC4E41" minColor="#FFFFFF"></colorPalette>
        <scale type="minMidMax"></scale>
      </format>
      <drilldown>
        <set token="name">$click.value$</set>
      </drilldown>
    </table>
  </panel>
  <panel depends="$name$">
    <table>
      <title>Bureau : $Site$</title>
      <search base="sante">
        <query>| stats count(web_app_duration_avg_ms) as "Nb lenteurs Web" count(hang_process_name) as "Nb hang", count(crash_process_name) as "Nb crash" by name | rename name as Nom</query>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
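For comparison, a common pattern is to filter the second panel's post-process search by the token set in the drilldown, so the detail table only shows the clicked row. A sketch of just the second panel's search, reusing the names from the question's XML:

```xml
<search base="sante">
  <query>| search name="$name$"
| stats count(web_app_duration_avg_ms) as "Nb lenteurs Web" count(hang_process_name) as "Nb hang", count(crash_process_name) as "Nb crash" by name
| rename name as Nom</query>
</search>
```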
Hello there, I have a report that is scheduled as follows: * * * * * But for the next scheduled time I got 2022-01-20 11:53:40 CET, and I want 2022-01-20 11:53:00 CET. Is there a way to set seconds? TY
I have a problem when I set up DLTK containers. I chose Golden Image CPU (3.7) from the list and I already pulled phdrieger/mltk-container-golden-image-cpu:3.7.0 to the local Docker, but I always get the error [list index out of range]. Can someone help me? That would be great.
I have created a bar graph with the following query:

index="cx_metrics_analysis" sourcetype="cx_metrics_httpevent" | eval duration=floor((TASK_DURATION)/3600000) | bin duration span=2s | chart distinct_count(TASK_NUM) as "Tasks" by duration | bin duration span=2

Since the bar graph has a lot of values on the x-axis, I'm trying to limit them by grouping the values into three buckets: one with duration less than 15, a second with duration between 15 and 25, and a last one with duration greater than 25.

| eval red = if(duration>25,duration,0)
| eval yellow = if(duration<=25 AND duration>15,duration,0)
| eval green = if(duration<=15, duration, 0)

Is this the correct method? Does anyone know how to solve this?
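An alternative sketch using case() to band the durations directly, so the chart's x-axis carries only the three groups (thresholds copied from the question; the band labels are arbitrary):

```spl
index="cx_metrics_analysis" sourcetype="cx_metrics_httpevent"
| eval duration = floor(TASK_DURATION/3600000)
| eval band = case(duration>25, ">25",
                   duration>15, "15-25",
                   true(),      "<=15")
| chart distinct_count(TASK_NUM) as "Tasks" by band
```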
I know this can be done in a classic dashboard, but is there a way to provide tooltip/hover functionality when using Dashboard Studio?
Is there an option to add a header and footer with a JPG image in a scheduled report?