All Topics

Hi, I want to know if it is possible to show the number of impacted records in the last 15 minutes for the search below.

Query: index=events_prod_tio_omnibus_esa ("SESE023" OR "SESE020" OR "SESE030")

Requirement: if the above search is executed at:
11:30 ==> it should show 0 records
11:40 ==> it should show 2 records (the last event, raised at 11:37:14, has 2 records, and current time - event time < 15 mins)
11:50 ==> it should show 2 records (same event, still within 15 mins)
11:55 ==> it should show 0 records (the last event still has 2 records, but current time - event time > 15 mins)
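A minimal SPL sketch of one way to read this requirement, assuming each matching event counts as one record and that restricting the search window to the last 15 minutes is acceptable; the index and search strings are taken from the question (if one event carries multiple records in a field, sum that field instead of counting events):

    index=events_prod_tio_omnibus_esa earliest=-15m ("SESE023" OR "SESE020" OR "SESE030")
    | stats count AS impacted_records

Because stats count with no by clause returns 0 when nothing matches, this would show 0 once the 11:37:14 event ages past the 15-minute window.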
Hi Community, I'm working on a scripted input. I have created a script to convert binary logs into human-readable format, and it is working fine. The issue is that the file I'm monitoring is in the "/var/log/test" directory, while the script is at "/opt/splunk/etc/apps/testedScript/bin/testedscript.sh", and Splunk is reporting the script path as the source. (Screenshot attached for reference.)

Below is the inputs.conf stanza I'm using (/opt/splunk/etc/apps/testScript/local/inputs.conf):

    [script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
    disabled = false
    index = testing
    interval = 30
    sourcetype = free2

Is there any way I can get the exact source address, which in my case is "/var/log/test/file1"?
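For what it's worth, inputs.conf allows a stanza to override the source field explicitly; a hedged sketch of the question's stanza with the monitored file's path set as the source (this assumes a single fixed file path, since the override is a static string):

    [script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
    disabled = false
    index = testing
    interval = 30
    sourcetype = free2
    source = /var/log/test/file1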
I just received an email stating that past June 14 we won't even be able to view past support tickets. I see this as a blocker for learning, because whenever I face an issue I refer to past tickets and learn from them before actually creating a new ticket. Could past tickets at least be made available as HTML to view? Kindly let me know if there are any such plans.
Hi, I have ingested the Qualys data using the Qualys TA add-on, with the inputs enabled to run once every 24 hours. I'm ingesting the host detection and knowledge base logs into Splunk. The requirement is to create a dashboard with multiple multiselect filters and do enrichment from our database. However, I found that the data in Qualys differs from the Splunk logs, and the input is ingesting only a portion of the data.

My ask: I want to ingest the complete data set every time the input runs, so that I get accurate data to use in the dashboards. Please help me.

Regards, Dayal
Hi, we are looking for migration guidance from Exabeam to Splunk. Is there a way to migrate data from the Exabeam data lake to Splunk? Also, is there any documentation or guidance available for Exabeam customers migrating to Splunk? Please let me know. Thanks, Guru
Good morning. Does anyone currently use Splunk, or an app in Splunk, to monitor folder size? We have been asked to set up new file-share folders for various teams, and as our storage resources are nearly exhausted, we'd like to monitor each user's folder size. The ideal scenario: a size threshold is set on each folder, and when a folder nears capacity, an alert triggers and the IT team takes action. Kind regards, Paula
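As far as I know, core Splunk does not measure folder sizes by itself, so this hedged sketch assumes the sizes are already being ingested somehow, for example by a scheduled script emitting one event per folder; the index, sourcetype, field names (path, size_mb), and the 10 GB threshold are all hypothetical:

    index=infra sourcetype=folder_size
    | stats latest(size_mb) AS size_mb BY path
    | where size_mb > 0.9 * 10240

Saved as an alert that fires when results are returned, this would flag folders above 90% of the assumed threshold.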
Referring to a previous question (Solved: How to insert hyperlink to the values of a column ... - Splunk Community): how can I add two different URLs for two different columns in the table, such that the respective hyperlink opens only when the value in the respective column is clicked?

    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "$row.firstLink.value$",
                "newTab": true
            }
        },
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "$row.secondLink.value$",
                "newTab": true
            }
        }
    ]
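For context, a hedged SPL sketch of how the firstLink and secondLink columns that those $row...$ tokens read from might be populated per row; the base URLs and the host/user fields are hypothetical:

    ... | eval firstLink="https://example.com/hosts/".host,
              secondLink="https://example.com/users/".user
        | table host user firstLink secondLink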
Hello, I have a dashboard with many panels, and in each panel I am using the geostats command to show that panel's search results on a world map. I want to add a shared zoom feature. Let me explain: say I am on panel 1 and have zoomed in on America to see which areas the results fall in. What I want is that if I switch to a different panel, it is also zoomed in on America. Is that possible?
Hi All, I want to filter out null values. My ImpCon field has null values, and I want to filter out the values I don't want to show in the table. I am trying the query below, but it is still showing the null values.

    | eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))
    | eval orcaleid=mvfilter(isnotnull(oracle))
    | eval OracleResponse=mvjoin(orcaleid," ")
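A hedged sketch of how this is commonly handled, assuming ImpConReqID is the multivalue field to clean; note that the expression inside mvfilter can only reference the single field being filtered:

    | eval ImpConReqID_clean=mvfilter(isnotnull(ImpConReqID) AND ImpConReqID!="")
    | eval ImpCon=mvjoin(mvmap(ImpConReqID_clean, "ImpConReqID: ".ImpConReqID_clean), " ")

To drop whole rows where the result is null, a follow-up like | where isnotnull(ImpCon) is the usual pattern.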
Specifically speaking about the dataSources section discussed here: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dashDef#The_dataSources_section

Hypothetically, I have two tables, each stored in an individual data source stanza:
Table 1 = ds.search stanza 1
Table 2 = ds.search stanza 2

The goal is to append the tables together and then use the "stats join" method to merge them. Ideally this merge could be done as a ds.chain stanza with two extend options, but that does not appear to be allowed. Here is the documentation for data source options: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dsOpt

The document seems to be missing options like "extend", so I'm hoping someone knows whether there are additional hidden options. I am trying to avoid [] subsearches because of the 50,000-row limit, so the following append command is not desired:

    <base search> | append [search ....]

Anyone with mastery of JSON hacks who knows whether appending two data source stanzas together is possible? Thank you.
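As one hedged alternative inside a single ds.search stanza: multisearch runs its bracketed branches as full concurrent searches rather than substituting subsearch results, so, to my understanding, the 50,000-row subsearch limit does not apply; both branches must be streaming searches, and the indexes and join key below are hypothetical:

    | multisearch
        [ search index=index_a sourcetype=type_a ]
        [ search index=index_b sourcetype=type_b ]
    | stats values(*) AS * BY join_key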
Hi, I have installed Splunk on my Ubuntu desktop and logged in once; however, on the second login attempt, it said it was unable to connect.
Hello. I am interested in data that occurs on Tuesday night from 8 PM until 6 AM. The caveat is that I need two separate time periods to compare: one is from the 2nd Tuesday of each month until the 3rd Thursday; the other is any other day in the month. So far I have:

    | eval day_of_week=strftime(_time, "%A")
    | eval week_of_month=strftime(_time, "%U")
    | eval day_of_month=strftime(_time, "%d")
    | eval start_target_period=if(day_of_week=="Tuesday" AND week_of_month>1 AND week_of_month<4, "true", "false")
    | eval end_target_period=if(day_of_week=="Thursday" AND week_of_month>2 AND week_of_month<4, "true", "false")
    | eval hour=strftime(_time, "%H")
    | eval time_bucket=case(
        (start_target_period="true" AND hour>="20") OR (end_target_period="true" AND hour<="06"), "Target Period",
        (hour>="20" OR hour<="06"), "Other Period")

My issue is that my "week of month" field is reflecting the week of the year. Any help would be greatly appreciated.
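strftime has no week-of-month code (%U is the week of the year), so a hedged sketch of deriving it from the day of the month instead; treating "week N" as the Nth seven-day slice of the month is an assumption:

    | eval day_of_month=tonumber(strftime(_time, "%d"))
    | eval week_of_month=floor((day_of_month - 1) / 7) + 1

Under this definition, the 2nd Tuesday is simply any Tuesday whose day_of_month falls between 8 and 14, which may be a simpler test than comparing week numbers.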
Hello, we recently replaced rsyslog with syslog-ng on our syslog server. We are collecting network device logs; every source is logged to its own <IPaddress>.log file, and a universal forwarder pushes them to the indexer. Inputs and outputs are fine, the data is flowing, and the sourcetype is standard syslog. Everything works as expected... except for some sources. I spotted this because the log volume has dropped since the migration; for those sources, not all of the events reach Splunk.

I can see the file on the syslog server; let's say there are 5 events per minute. The events are similar (for example, "XY port is down") but not identical: the timestamp in the header and the timestamp in the event's message differ (the events are still the same length). So the log file has 5 events/min, but in Splunk I see only one event per 5 minutes; the rest are missing. Splunk seems to randomly pick ~10% of the events from the file (the extractions are fine for those, and there is no special character or anything unusual in the "dropped" events).

My feeling is that it is because of the similar events; Splunk thinks they are duplicates. On the other hand, that can't be right, because they are different. Any advice? Should I try adding a crcSalt or changing the sourcetype? BR, Norbert
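One hedged thing worth ruling out: with line merging enabled, consecutive near-identical lines can be collapsed into a single multi-line event, which would look exactly like ~4 of every 5 events "disappearing". A props.conf sketch (on the parsing tier, for the sourcetype in question) that forces one event per line:

    [syslog]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

Comparing a "missing" event's raw text against the surviving event in Splunk would confirm whether the missing lines were merged into it.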
I am new to Splunk Mission Control and have been assigned to demo the Splunk Cloud platform with the following features:

Incident Management: simplifies the detection, prioritization, and response process.
Investigative Capabilities: integrates diverse data sources for thorough investigations.
Automated Workflows: reduces repetitive tasks through automation.
Collaboration Tools: facilitates communication and information sharing within the SOC team.

Details: provide examples of automated workflows specific to common SOC scenarios. Can somebody provide me with links to how-to videos and documentation to set up my demo? Thank you.
The .NET agent status is at 100% even after deleting the .NET and machine agents. All the servers were rebooted and checked for AppDynamics-related services and folders; they were all removed. Could this be old data still reflected on the AppDynamics controller?
I use AppDynamics to send a daily report on slow or failed transactions. While the email digest report is helpful, is there a way to include more detailed information about the data collectors (name and value) in the digest? Is this something done using custom email templates?
index=abcd "API : access : * : process : Payload:" |rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:" |rex " access : (?<Event>.+) : process" |stats count as Total by Event |join type=inner ID [|search index=a... See more...
index=abcd "API : access : * : process : Payload:" |rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:" |rex " access : (?<Event>.+) : process" |stats count as Total by Event |join type=inner ID [|search index=abcd "API" AND ("Couldn't save") |rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:" |dedup ID |stats count as Failed ] |eval Success=Total-Failed |stats values(Total),values(Success),values(Failed) by Event Event values(Total) values(Success) values(Failed) Event1 76303 76280 23 Event2 4491 4468 23 Event3 27140 27117 23 Event4 118305 118282 23 Event5 318810 318787 23 Event6 9501 9478 23 I am trying to join to different search (index is common) on ID field and then trying to group them by "Event" field but the Failed column is showing the same value for all the events.
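A hedged observation and sketch: the subsearch's stats count as Failed collapses everything to a single row with no ID or Event field left, so the join fans the same 23 out to every event. One single-pass alternative that flags failures per ID and then aggregates by Event, reusing the question's field names; the combined search string is an assumption:

    index=abcd ("API : access : * : process : Payload:" OR "Couldn't save")
    | rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:"
    | rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:"
    | rex " access : (?<Event>.+) : process"
    | eval failed=if(searchmatch("Couldn't save"), 1, 0)
    | stats values(Event) AS Event max(failed) AS failed BY ID
    | stats count AS Total sum(failed) AS Failed BY Event
    | eval Success=Total-Failed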
Is there a playbook for this kind of thing: a "user password policy enforcement" playbook?
Hello. I'm using the trial and following the instructions for sending to APM with a manually instrumented Python app, as seen below:

    apiVersion: apps/v1
    kind: Deployment
    spec:
      selector:
        matchLabels:
          app: your-application
      template:
        spec:
          containers:
            - name: myapp
              env:
                - name: SPLUNK_OTEL_AGENT
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
                - name: OTEL_EXPORTER_OTLP_ENDPOINT
                  value: "http://$(SPLUNK_OTEL_AGENT):4317"
                - name: OTEL_SERVICE_NAME
                  value: "blah"
                - name: OTEL_RESOURCE_ATTRIBUTES
                  value: "service.version=1"

If I'm using the Splunk distribution of the OTel Collector, how can I get the DNS name for the OTEL_EXPORTER_OTLP_ENDPOINT without having to use status.hostIP?
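If the collector also runs in gateway mode (a Deployment behind a ClusterIP Service, which the Splunk OTel Collector Helm chart can create), a hedged sketch of pointing the env var at the Service's DNS name instead of the host IP; the service name and the "monitoring" namespace below depend on the Helm release and are assumptions:

    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      # Assumed Service: splunk-otel-collector in namespace "monitoring"
      value: "http://splunk-otel-collector.monitoring.svc.cluster.local:4317"

With the default per-node agent DaemonSet, however, status.hostIP is, to my understanding, the intended way to reach the agent.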
Hello, can the below Windows event log channel be ingested into Splunk, and is it covered by any add-ons?

Microsoft\Windows\Privacy-Auditing\Operational EventLog

Thanks
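A hedged sketch of how an arbitrary Windows event log channel can usually be ingested with the built-in WinEventLog input; the channel name below is the question's path rewritten in channel form and should be verified against the channel's full name in Event Viewer, and the index is hypothetical:

    [WinEventLog://Microsoft-Windows-Privacy-Auditing/Operational]
    disabled = 0
    index = wineventlog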