All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, has anyone ever struggled to install the MS AD Objects app? During the installation it tells me that the baseline is not present in the index, even though the sync with Active Directory completed successfully.
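A minimal sanity check (a sketch; the index name msad is a placeholder for wherever the app is pointed, and it assumes the admon input uses the default ActiveDirectory sourcetype) is to confirm that baseline (admonEventType=Sync) events actually exist in that index:

index=msad sourcetype=ActiveDirectory admonEventType=Sync
| stats count by host

If this returns nothing, the app has no baseline data to build from.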
Hi, I want to monitor a C++ application that runs on a Windows machine. Is it possible to monitor it with only the C/C++ SDK agent, without adding any API calls to the source code or any instrumentation? Regards, Hemanth Kumar.
Hello Community, my team and I are trying to configure a custom deployment application which has to be implemented ONLY through the command line. There should be no interaction with the UI while configuring the custom app itself - this is a client request. The application gathers log data from three different log files located on a Windows Server 2019 machine, and the main idea is to have this app in order to properly segment all the information coming from those logs. After the information is gathered, it has to be searchable in the default Splunk Search application.

As read in the documentation, the deployment application is placed under: $SPLUNK_HOME/etc/deployment-apps

The underlying file structure is as follows:

local ( contains: inputs.conf, props.conf )
metadata ( contains: local.meta )

All the configuration we have performed is defined in props.conf, specifying the custom fields we want to display in the Splunk Search application. You can refer to the props.conf file itself below:

[inwebo:synclog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %d.%m.%Y %H:%M:%S
category = Miscellaneous
description = InWebo Sync Activation Mails Log
disabled = false
pulldown_type = true
MAX_TIMESTAMP_LOOKAHEAD =
EXTRACT-TimeStamp = ^(?P<TimeStamp>\d+\.\d+\.\d+\s+\d+:\d+:\d+)

[inwebo:iwdslog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Miscellaneous
pulldown_type = 1
EXTRACT-TimeDate = ^(?P<TimeDate>[^,]+)
EXTRACT-Status = ^[^ \n]* (?P<Status>[^ ]+)
MAX_TIMESTAMP_LOOKAHEAD =
description = InWebo IWDS Log
disabled = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

[inwebo:gdriveuploaderlog]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD =
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%m-%Y %H:%M:%S
category = Miscellaneous
description = GDrive Uploader (InWebo Service)
pulldown_type = 1
disabled = false
EXTRACT-DateTime = ^(?P<DateTime>\d+\-\d+\-\d+\s+\d+:\d+:\d+)
EXTRACT-Status = (?=[^U]*(?:Upload|U.*Upload))^(?:[^ \n]* ){3}(?P<Status>\w+)

As you can see, the EXTRACT- attributes are what we thought should be enough to have the necessary new fields presented in the Splunk Search application. As for the inputs.conf file, we have pre-defined the necessary information about the log locations and also the index the information should be gathered into.
Here's a small sample, just in case there is a misconfiguration there:

[monitor://E:\inWebo-Prod-Varn-4358\log\IWDS.log]
disabled = false
index = mfa_inwebo
sourcetype = iwdslog
host = BJKW1PZJFLTFA01

** the other logs are specified analogously in the same file

In order to create a relation between the deployment application we have implemented and the default Splunk Search application, we have added the index configuration to the default Search application configuration, inside the indexes.conf file:

[mfa_inwebo]
coldPath = $SPLUNK_DB/mfa_inwebo/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/mfa_inwebo/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/mfa_inwebo/thaweddb
archiver.enableDataArchive = 0
bucketMerging = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableOnlineBucketRepair = 1
hotBucketStreaming.deleteHotsAfterRestart = 0
hotBucketStreaming.removeRemoteSlicesOnRoll = 0
hotBucketStreaming.reportStatus = 0
hotBucketStreaming.sendSlices = 0
metric.enableFloatingPointCompression = 1
metric.stubOutRawdataJournal = 1
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =

The main goal is to have an independent deployment application which could easily be transferred to another search head and remain searchable without additional configuration. That is why we did not copy the deployment app's props.conf settings into the props.conf of the default Search application. The problem we are facing is that we are not able to see the field extractions in the default Search application - they simply do not exist as they should. The index is displayed, the source types are visible as well, and all the necessary log information is available, but the custom fields are not present. Do you notice any reason why the custom field extractions (the EXTRACT- attributes above) are not displayed in the search results? I hope this brings enough clarification; if not, I am ready to provide additional resources. Thank you very much for your cooperation and support in advance. Nikola Atanasov
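For troubleshooting, a quick search-time check (a sketch; it assumes the data is already indexed) is to compare the sourcetype actually attached to the events with the props.conf stanza names, since EXTRACT- settings only apply when the stanza name matches the events' sourcetype - here inputs.conf assigns sourcetype = iwdslog while the props stanza is named [inwebo:iwdslog]:

index=mfa_inwebo | stats count by sourcetype

index=mfa_inwebo sourcetype=iwdslog
| head 5
| table _time TimeDate Status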
Hi everyone, this might be a weird question, but I have been testing out ITSI, and recently I tried editing some thresholds I set for KPIs I created myself. Regardless of the service or KPI I try to edit, the Thresholding drop-down does NOT open. I can open the "Search and Calculate" drop-down no problem, as well as the "Anomaly Detection" drop-down (and get a Java warning, to boot). I feel like this might be an issue with the system, but why specifically Thresholding? What happens is that the arrow points down as if it had opened, but nothing actually happens. I tried different browsers too. If anyone can help me, it would be very much appreciated.
Hi Splunk Community team, I have observed a high number of events (logs) from WinEventLog:Security. Please suggest a best practice or solution to reduce/suppress these events. I have referred to the document below and found that the current add-on is up to date. https://docs.splunk.com/Documentation/WindowsAddOn/8.1.1/User/Configuration
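As a starting point (a sketch; the index name is a placeholder for wherever the Security events land, and the sourcetype should match whatever the add-on actually assigns), it helps to identify which EventCodes account for most of the volume, so any input-side filtering can target the noisiest ones:

index=wineventlog sourcetype="WinEventLog:Security"
| stats count by EventCode
| sort - count
| head 20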
Hi there. As in the subject: how can I make NMON aggregated data available NOT ONLY TO ADMIN users? I can query all data from NMON since I'm an admin on the system, but I would like to make some metrics available in dashboards for ordinary users; all queries run by normal users return no data... how can I do this? I tried giving read grants to Everyone on all NMON event types, but with no success. Do I have to edit ALL NMON objects??? Thanks
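One quick check (a sketch): have one of the normal users run the following to list the indexes their role can actually search, since knowledge-object permissions will not help if the role has no access to the index the NMON data is stored in:

| eventcount summarize=false index=*
| dedup index
| table index

If the NMON index is missing from that list, the role's allowed indexes need to be extended.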
Hi there, we have implemented a component to send logs to Splunk Cloud, but we are getting a lot of extra blank lines along with the logs. We have implemented all the components using containerization technology and want to know why we are getting these extra blank lines. We are using "splunk" as the Docker logging driver and sending events via the HTTP Event Collector (HEC).
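To quantify the problem (a sketch; the index and sourcetype here are placeholders for wherever the container logs land), the blank events can be counted per source:

index=docker_logs sourcetype=httpevent
| where len(trim(_raw))=0
| stats count by host, source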
Is there a way to export a glass table created in ITSI so that it can be used as an iFrame link? Currently we are receiving a '[stackname] refused to connect' error when attempting to add the glass table link to an internal website.
I've got this error when testing the creation of an incident.
Good afternoon! I want to know how Splunk stores data; I can't find detailed information. Can I connect a DBMS (for example MS SQL or MySQL) to Splunk in order to store the data that goes into Splunk? What is the default database for Splunk? What type of database is it (relational or NoSQL)? How does Splunk store data, such as data coming from JSON files?
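For reference, Splunk does not use an external relational database for event data; events are written to Splunk's own proprietary index files ("buckets") on disk. A quick way to look at that storage layer from the search bar is the dbinspect command (a sketch; main is simply the default index name):

| dbinspect index=main
| table bucketId, state, path, eventCount, sizeOnDiskMB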
If I save the data and the same data is included again, will the existing data be updated, or is there no change?
We are trying to create a query to get the list of fields in all sourcetypes, grouped by sourcetype and index. We tried to use the following query, but its performance is very slow.

| tstats count WHERE index IN(main,_introspection) GROUPBY index, sourcetype
| rename index AS indexname, sourcetype AS sourcetypename
| map maxsearches=100 search="| search index=\"$indexname$\" sourcetype=\"$sourcetypename$\" | head 1 | fieldsummary | eval index=\"$indexname$\", sourcetype=\"$sourcetypename$\" | WHERE NOT isnull(mean) | fields index, sourcetype, field"

Since there can be any number of sourcetypes (350+ for index=main), maxsearches cannot be set to such a high number. Is there any way to optimize this query to improve performance, or any other query that will do the job without the performance lag?
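One alternative that avoids map altogether, a sketch of a common community pattern rather than an official approach, is to let foreach record the names of the fields present on each event and then aggregate them; sampling with head keeps the cost down, at the risk of missing rarely populated fields:

index=main OR index=_introspection
| head 50000
| foreach * [ eval field_list=mvappend(field_list, "<<FIELD>>") ]
| stats values(field_list) AS fields by index, sourcetype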
I have a table with the following information:

Fecha
31/08/2022 16:16:43
31/08/2022 16:19:48
31/08/2022 16:16:34
31/08/2022 16:16:40

I now want to group this information by day, with the start time and end time, for example: 31/08/2022 16:16:34 - 16:19:48

The query:

index=o365 sourcetype=o365:management:activity Operation=UserLoginFailed user=
| stats count, values(user) as Usuario by _time
| eval Fecha = strftime(max(_time), "%d/%m/%Y %H:%M:%S")
| rename count as Contador
| sort -Contador
| table Fecha, Usuario, Contador

Can you help me, please?
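One way to get a per-day start and end time (a sketch built on the same base search, using the field names from the query above):

index=o365 sourcetype=o365:management:activity Operation=UserLoginFailed
| bin span=1d _time as day
| stats min(_time) as start, max(_time) as end, count as Contador, values(user) as Usuario by day
| eval Fecha = strftime(day, "%d/%m/%Y")." ".strftime(start, "%H:%M:%S")." - ".strftime(end, "%H:%M:%S")
| table Fecha, Usuario, Contador
| sort - Contador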
Hello All, I have been asked to show trends for a business requirement with the dataset I have: past, present and a 'possible' prediction for 3-4 months. The only time-related field I have is "WeekStarting", the week in which events have occurred. To make it more relatable, I need to show trends in login sharing. Due to the magnitude of the data, to make more sense of the values I have selected 3 quarters over 3 years, i.e. (WeekStarting="2020-07-20" OR WeekStarting="2020-08-24" OR WeekStarting="2020-09-28" OR WeekStarting="2021-07-26" OR WeekStarting="2021-08-23" OR WeekStarting="2021-09-20" OR WeekStarting="2022-06-20" OR WeekStarting="2022-07-18" OR WeekStarting="2022-08-22"). Since I don't have any daily or finer time-series data, it is difficult for me to use the timechart or timewrap commands. What I have used so far, among many other attempts:

index="AB" sourcetype="AB"
| spath
| search (WeekStarting="2020-07-20" OR WeekStarting="2020-08-24" OR WeekStarting="2020-09-28" OR WeekStarting="2021-07-26" OR WeekStarting="2021-08-23" OR WeekStarting="2021-09-20" OR WeekStarting="2022-06-20" OR WeekStarting="2022-07-18" OR WeekStarting="2022-08-22")
| stats values(TotalUsers), values(DeviceTypes{}), values(WeekStarting), sum(Newbrowsertypes) as Aggregate_Logins by AccountID
| where Aggregate_Logins >= 5

I know these are not trend commands, but I am really lost as to how I can show trends with this dataset. Please help!!
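Since WeekStarting is a date string, one hedged option (a sketch; it assumes the values parse as %Y-%m-%d and that Newbrowsertypes is the measure to trend) is to turn it into _time so the usual time-series commands work, and then let predict extrapolate a few months ahead:

index="AB" sourcetype="AB"
| spath
| eval _time=strptime(WeekStarting, "%Y-%m-%d")
| timechart span=1w sum(Newbrowsertypes) as Aggregate_Logins
| predict Aggregate_Logins future_timespan=16

Sixteen weeks is roughly four months; predict works best on a reasonably continuous series, so gaps from sampling only a few weeks will limit its accuracy.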
How do I list the events whose array has more than 1 item?

1) a:[ {"data1":"abc"},{"data1":"def"}]
2) a:[ {"data1":"abc"}]
3) a:[ {"data1":"abc"},{"data1":"def"}]
4) a:[ {"data1":"abc"}]

I want to list only events 1 and 3.
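A minimal sketch, assuming the events are valid JSON and the array field is named a as in the samples: extract the array elements with spath and keep only events whose multivalue result has more than one entry (append to the base search):

| spath path=a{} output=a_items
| where mvcount(a_items) > 1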
Hi, I have a search query where a field is named "user_email". I also have a lookup table with a list of emails. Now I want my search query to only show results where "user_email" is present in that lookup table. What command is most appropriate for this?
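One common pattern (a sketch; the file name email_list.csv and its column name are assumptions) is to use the lookup as a subsearch filter, which expands into an OR of user_email=... conditions:

index=your_index sourcetype=your_sourcetype
    [ | inputlookup email_list.csv | fields user_email ]

If the lookup column has a different name, rename it to user_email inside the subsearch so the generated conditions match the event field.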
Hello, I am going through documentation on how to deploy a multisite indexer cluster. The course documentation suggests completing the installation in this order:

License Master
Cluster Master
Indexer
Indexer
Indexer
Search Head

Link them all together into a single-site cluster. It then suggests converting to a multisite cluster with the following commands:

./splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -replication_factor 1 -search_factor 1 -secret idxcluster
./splunk edit cluster-config -site site1
./splunk restart
./splunk edit cluster-config -site site1
./splunk restart
etc.

Anyway, my question is: is there a shortcut to configuring a multisite cluster without having to configure a single-site cluster first and then convert it to a multisite cluster? And if there is, what are the commands? Many thanks
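For what it's worth, here is a hedged sketch of going straight to a multisite configuration. Flag names vary by Splunk version (older releases use -mode master / -mode slave and -master_uri instead of -mode manager / -mode peer and -manager_uri), and <manager-host> is a placeholder, so treat this as an outline rather than the definitive procedure:

# on the manager node
./splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -secret idxcluster
./splunk restart

# on each indexer (peer), with its own -site value
./splunk edit cluster-config -mode peer -site site1 -manager_uri https://<manager-host>:8089 -replication_port 9887 -secret idxcluster
./splunk restart

# on the search head
./splunk edit cluster-config -mode searchhead -site site1 -manager_uri https://<manager-host>:8089 -secret idxcluster
./splunk restart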
Hi All, I have two sets of logs as below and I want to create a table combining them.

Type1:
Log1:
MACHINE@|@Port@|@Country@|@Count MEMORY STATUS
mwgcb-csrla01u|8070|EAS|5 CNF_| PASS|mwgcb-csrla01u PASS|mwgcb-csrla02u

Type2:
Log1:
source.mq.apac.sg.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla02u RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla02u
Log2:
source.mq.apac.in.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla01u FAILED|mwgcb-csrla02u NA
Log3:
source.mq.apac.my.cards.eas.eas.raw.int.rawevent FAILED|mwgcb-csrla02u RUNNING|mwgcb-csrla01u NA
Log4:
source.mq.apac.th.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla01u NA
Log5:
source.mq.apac.hk.cards.eas.eas.raw.int.rawevent UNASSIGNED|mwgcb-csrla01u RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla02u

I extracted the required fields from each of the log types and am trying to create a table with the fields Machine_Name, Port, Worker_Node, Connector_Count, Success_Count, where Success_Count is the number of connectors that are in RUNNING state for a Worker_Node. For example, for the above set of logs the table should look like:

Machine_Name    Port    Worker_Node    Connector_Count    Success_Count
mwgcb-csrla01u  8070    EAS            5                  3

I tried to combine the two sets of logs with the query below, but was not successful in getting the above table.

| multisearch
    [ search index=ABC host=XYZ source=KLM
      | regex _raw="\w+\-\w+\|\d+"
      | rex field=_raw "(?P<Machine_Name>\w+\-\w+)\|(?P<Port>\d+)\|(?P<Worker_Node>\w+)\|(?P<Connector_Count>\d+)\s" ]
    [ search index=ABC host=XYZ source=KLM
      | regex _raw!="\w+\-\w+\|\d+"
      | regex _raw!="properties"
      | regex _raw!="MACHINE"
      | regex _raw!="CONNECTOR_NAME"
      | regex _raw!="CNF"
      | regex _raw!="Detailed"
      | rex field=_raw "(?P<Connector_Name>(\w+\.){3,12}\w+)\s"
      | rex field=_raw "(?P<Connector_Name>(\w+\-){3,12}\w+)\s"
      | rex field=_raw "(\w+\.){3,12}\w+\s(?P<Connector_State>\w+)\|"
      | rex field=_raw "(\w+\-){3,12}\w+\s(?P<Connector_State>\w+)\|"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
      | replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker1_ID
      | replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker1_ID
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
      | replace "NA" with "Not_Available" in Task2_State
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
      | replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker2_ID
      | replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker2_ID
      | fillnull value="Not_Available" Task1_State, Worker1_ID, Task2_State, Worker2_ID ]
| lookup Worker_Connector_List.csv "Connector_Name"
| search Worker_Node=EAS
| stats latest(Connector_State) as Connector_State by Connector_Name
| eval Status=if(Connector_State="RUNNING", "1","0")
| stats sum(Status) as Success_Count
| table Machine_Name,Port,Worker_Node,Connector_Count,Success_Count

Please help me to create/modify the query so that I can get the table in the desired manner. Thank you All..!!
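A hedged sketch of the final aggregation (assuming the lookup does populate Worker_Node on the Type2 events as intended): copy the machine-level fields onto every event with the same Worker_Node using eventstats before collapsing to one row per connector, so Machine_Name, Port and Connector_Count survive the later stats instead of being dropped:

... multisearch, extractions and lookup as above ...
| search Worker_Node=EAS
| eventstats values(Machine_Name) as Machine_Name, values(Port) as Port, values(Connector_Count) as Connector_Count by Worker_Node
| stats latest(Connector_State) as Connector_State, values(Machine_Name) as Machine_Name, values(Port) as Port, values(Connector_Count) as Connector_Count by Worker_Node, Connector_Name
| eval Status=if(Connector_State="RUNNING", 1, 0)
| stats values(Machine_Name) as Machine_Name, values(Port) as Port, values(Connector_Count) as Connector_Count, sum(Status) as Success_Count by Worker_Node
| table Machine_Name, Port, Worker_Node, Connector_Count, Success_Count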
Hello, I'm trying to draw a cumulative timechart using a CSV file which contains events, each with its starting date and its ending date (basically three fields: "EventName", "StartingDate" and "EndingDate"). The line in the chart should increase when an event starts and decrease when an event finishes. I attached an example of what I am trying to explain; hope it helps. I tried to create time ranges from the starting and ending dates to draw the chart I want, but I'm not sure it's the correct way to do it... Thanks in advance
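One common pattern for this (a sketch; the lookup name events.csv and the %Y-%m-%d date format are assumptions to adjust) is to turn each event into a +1 point at its start and a -1 point at its end, then accumulate with streamstats:

| inputlookup events.csv
| eval start=strptime(StartingDate, "%Y-%m-%d"), end=strptime(EndingDate, "%Y-%m-%d")
| eval points=mvappend(start.",1", end.",-1")
| mvexpand points
| eval _time=tonumber(mvindex(split(points, ","), 0)), delta=tonumber(mvindex(split(points, ","), 1))
| sort 0 _time
| streamstats sum(delta) as ActiveEvents
| timechart span=1d max(ActiveEvents) as ActiveEvents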
Our app is enclosed within a Docker container environment. We can access the app only through standard web interfaces and APIs; we have no access to the underlying operating system. So, through an API we retrieve the logs and store them on a remote server. We unzip them, put them in the known paths, and the Splunk UF on that device forwards them to Splunk.

We retrieve the logs every hour, and they overwrite what is already there. This means that, as seen by the Splunk UF, they appear to be new logs; in reality they are the same file, just with another hour of data in them.

Could you please advise on how to deal with this seemingly duplicate log information? Is there a way to handle it in a Splunk search pipeline? Or should we adjust our log collection process before the Splunk UF sends the files to the Splunk Cloud Platform? Thank you.
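If the overlapping data does get indexed, one search-time workaround (a sketch; the index and sourcetype are placeholders) is to de-duplicate on a hash of the raw event, although fixing the collection side is generally the cleaner option:

index=app_logs sourcetype=app:log
| eval event_hash=md5(host.source._raw)
| dedup event_hash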