All Topics

In our environment, Splunk is running a process called "splunk-powershell.exe". What role does this process play? The executable is in the folder below, and when I looked at its file properties there was no information: C:\Program Files\SplunkUniversalForwarder\bin\. Please tell me more about this process.
I have two servers: one Windows and one Unix. Performance data (CPU, memory, disk usage) from both servers comes into Splunk. My question: I need an alert that fires if usage on either server exceeds 90%.
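A minimal alert-search sketch, assuming the performance data lands as events with hypothetical fields resource and pct_used (the real index and field names depend on the add-ons feeding Splunk, e.g. Perfmon on Windows and the *nix TA, so substitute accordingly):

(index=windows_perf OR index=unix_perf)
| stats latest(pct_used) as pct_used by host, resource
| where pct_used > 90

Saved as an alert on a short cron schedule with the trigger condition "number of results > 0", this would fire whenever any host's CPU, memory, or disk crosses 90%.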
For example, one field of the Email data model is "recipient", and it comes from tag=email. However, my email information comes from the Microsoft O365 integration, where the recipient information is given in a field called "ExchangeDetails.Recipients{}". As far as I have been able to understand, I have to modify the "email" tag (under Event Types) so it looks in "index=o365 Workload=Exchange" for email-related logs, and after that create an alias so that "ExchangeDetails.Recipients{}" is equivalent to "recipient" as indicated in the data model. Is that correct? Thank you for your assistance.
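A sketch of those two pieces as .conf stanzas, assuming a hypothetical eventtype name o365_exchange_email and sourcetype o365:management:activity (the same can be done in the UI under Settings > Event types and Settings > Fields > Field aliases):

eventtypes.conf:
[o365_exchange_email]
search = index=o365 Workload=Exchange

tags.conf:
[eventtype=o365_exchange_email]
email = enabled

props.conf:
[o365:management:activity]
FIELDALIAS-o365_recipient = "ExchangeDetails.Recipients{}" AS recipient

With the eventtype tagged email and the alias in place, the data model's tag=email constraint should pick up the events and see a populated recipient field.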
Hello everyone. I have a question about using curl to query Splunk internal data from outside Splunk, for example sending index=_internal | stats count from an external client and getting the count back. Do you have any relevant documentation? If so, please send a link. Thank you very much.
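A sketch of such a call against the REST search endpoint, with placeholder host and credentials (8089 is the default management port):

curl -k -u admin:changeme https://splunk-host:8089/services/search/jobs/export \
  -d search="search index=_internal | stats count" \
  -d output_mode=json

The export endpoint streams results back in a single request; the relevant manual is the Splunk REST API Reference, under search/jobs/export.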
Hi all, I have already integrated O365 using the O365 Management API and am collecting the user, admin, system, and policy actions and events for O365: https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference I now want to collect similar data from an on-premises Exchange server, but I don't know the logs. The Splunk Add-on for Microsoft Exchange collects the following data using scripted inputs: Senderbase/reputation data, topology and health information, and mailbox server health and usage information. Does similar data even exist on a local MS Exchange server, and is it possible to collect that data with a UF? Any help pointing me in the right direction would be appreciated. Best, N.
Hi, I have a query that returns around 4000 results, and I want to run a map query for all 4000 of them. This is the query, but it doesn't return any results; the individual queries work fine.

index=xxxxx_xxxxx2_idx ns=yyy-yyyy xxxx-t1-* totalDuration
| spath input=message output=overallTimeTaken path=totalDuration
| where overallTimeTaken > 226
| spath input=message output=yyy-yyyy-correlation-id-var path=yyy-yyyy-correlation-id
| map search="search index=xxxxx_xxxxx2_idx ns=xxxx-api-v4 app_name=xxxxarngs-* xxxxRequestLoggingHandlerImpl $yyy-yyyy-correlation-id-var$ | head 1 | eval arngServerTimeTaken=mvindex(split(_raw," "),-2) | eval id=mvindex(split(_raw," "),-8) | stats id, max(arngServerTimeTaken) as arngServerTimeTaken | appendcols [ search index=xxxxx_xxxxx2_idx ns=xxxx-api-v4 app_name=xxxxtranslation-* xxxxRequestLoggingHandlerImpl $yyy-yyyy-correlation-id-var$ | head 1 | eval translationServerTimeTaken=mvindex(split(_raw," "),-2) | stats max(translationServerTimeTaken) as translationServerTimeTaken]" maxsearches=0
| table id, arngServerTimeTaken

There will be around 4000 values of yyy-yyyy-correlation-id-var from the first query going into map as input. I need to make this work with map/multisearch, as I have 10 other columns I want to add to the result from other search queries.
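One detail worth checking, based on the map command's documentation: maxsearches=0 does not mean unlimited; map still stops at its default of 10 searches. For roughly 4000 input rows the cap has to be set explicitly, e.g. (the 5000 value is an assumption sized to the expected row count):

... | map maxsearches=5000 search="search index=xxxxx_xxxxx2_idx ... $yyy-yyyy-correlation-id-var$ | ..."

The ellipses stand for the original subsearch body, unchanged.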
Hi, we are trying to move from a single-site to a multisite Splunk cluster, but it's not clear how SH clustering is supposed to work.

1. Per the documentation, the recommended way is to have two separate SH clusters, but it doesn't look like we get knowledge bundle replication (configs, user knowledge objects, etc.) between the two SH clusters. If that's the case, I don't get the point of suggesting multisite as a DR solution: when site 1 fails, users connecting to site 2 won't have their knowledge objects and settings on the new SH cluster!? https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Multisitearchitecture

2. The other suggestion for getting knowledge bundle and search artifact replication is a single SH cluster spanning both sites, but this also can't serve as a DR solution, since whenever the site with the majority (or an equal number) of SHs fails completely, the SH machines at the other site won't be able to form a cluster because they won't have a majority. A workaround suggested there is to deploy a static captain instead. https://docs.splunk.com/Documentation/Splunk/8.2.0/DistSearch/DeploymultisiteSHC
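For reference, the static-captain workaround in that second link comes down to CLI calls along these lines (the URI is a placeholder; this is a sketch of the documented recovery steps, not a full runbook):

On the member being promoted to captain:
splunk edit shcluster-config -election false -mode captain -captain_uri https://sh1.example.com:8089

On each surviving member:
splunk edit shcluster-config -election false -mode member -captain_uri https://sh1.example.com:8089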
I use DB Connect to pull data from the database every 1 minute (cron: */1 * * * *). I would like to ask whether this schedule is simply:

#1 - Run at e.g. 10:00, then 10:01, then 10:02, and so on, or
#2 - Run at e.g. 10:00, wait until the job is done (say 10:00:35), so the next run is at 10:01:35.

Please advise, as we are encountering missing data when we rely on #1. Thanks
Hi, I've got some machine agent installations where I'm getting messages like this:

[#|2021-08-02T14:38:39.254+1000|WARNING|glassfish 4.1|com.appdynamics.SIM|_ThreadID=80;_ThreadName=http-listener-2(7);_TimeMillis=1627879119254;_LevelValue=900;|#SIM000121 The maximum number of monitored processes per machine allowed has been reached for machine 33. The limit sim.processes.count.maxPerMachine is set to 1000 processes. This limit will be reset after the next process purging or when some processes are deleted by the user. Could not create 9 processes for machine 33|#]

When I look at machine 33, I find it has numerous duplicates of the same processes, varying only in start/end time. It seems that if I increase the maxPerMachine limit, we'll just delay hitting the limit again, because the count is constantly being used up by the same processes over and over. This looks like a bug. Is there a workaround?
Hi, I've exceeded my configured match_limit in limits.conf with this regex:

"log":\s"(?<log_source>.*?)\s(?<ISO8601>.*?)\| (?<exchangeId>.*?)\|(?<AUDIT_trackingId>.*?)\| (?<client_ip>.*?)\|(?<FAPI_ip>.*?)\|(?<AUDIT_roundTripMS>.*?) ms\| (?<AUDIT_proxyRoundTripMS>.*?) ms\| (?<AUDIT_userInfoRoundTripMS>.*?) ms\| (?<AUDIT_resource>.*?)\s\[\]\s\/(?<AUDIT_subject>.*?)\/\*\:(?<dest_port>.*?)\|(?<AUDIT_authMech>.*?)\|(?<AUDIT_scopes>.*?)\| (?<AUDIT_client>.*?)\| (?<AUDIT_method>.*?)\| (?<AUDIT_requestUri>[^\s\?"|]++)(?<uri_query>\?[^\s"]*)?.*?\| (?<AUDIT_responseCode>.*?)\|(?<AUDIT_failedRuleType>.*?)\|(?<AUDIT_failedRuleName>.*?)\| (?<AUDIT_applicationName>.*?)\| (?<AUDIT_resourceName>.*?)\| (?<AUDIT_pathPrefix>.*?)\s

Is there a way to make it more efficient? Please advise.
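One common way to cut the backtracking is to replace each lazy .*? with a negated character class tied to the delimiter that follows it. A sketch of the first few captures (the remaining ones follow the same pattern; verify each field's actual delimiter before converting):

"log":\s"(?<log_source>\S+)\s(?<ISO8601>[^|]+)\| (?<exchangeId>[^|]*)\|(?<AUDIT_trackingId>[^|]*)\| (?<client_ip>[^|]*)\|(?<FAPI_ip>[^|]*)\| ...

A class like [^|]* can never backtrack past a pipe, so the engine either matches quickly or fails quickly instead of burning through match_limit.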
Will Splunk do a stacked area chart? I'm able to get an area chart, but it's not stacked (so each proxy contributes to an aggregate total). I'm wondering if Splunk can even do that. I looked at the documentation and it appeared that it could, so I'm hoping I'm just doing something wrong. Under the "Visualization" tab:

index="myindex"
| bin _time span=5m
| stats sum(cs_bytes) as Bytes by proxy_server _time
| eval Kbps=(((Bytes*8)/1000)/300)
| timechart span=5m list(Kbps) by proxy_server
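Splunk area charts do support stacking: on the Visualization tab choose Area, then Format > General > Stack Mode > stacked. The usual catch is the series shape: list(Kbps) produces multivalue cells, which don't stack, whereas a single numeric value per proxy per time bucket does. A sketch using the same field names as above, computing Kbps per event and letting timechart aggregate:

index="myindex"
| eval Kbps=(cs_bytes*8/1000)/300
| timechart span=5m sum(Kbps) by proxy_server

sum(Kbps) over a 5-minute bucket works out to (total bytes * 8 / 1000) / 300, i.e. average kilobits per second across the bucket, one series per proxy_server, ready to stack.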
I have a requirement to forward the search results of a query to an indexer at an external organization. The volume of this data would be fairly high. I understand there are multiple ways to achieve this. I am thinking of using a script that runs every 5 minutes to grab the search results via the REST API, store them locally on disk, and forward them from there via outputs.conf. I also understand this is doable via a script, but the challenge is that I am not that experienced with scripting, hence a little unsure. So I'm wondering if anyone can share whether there is an easier way than doing this via a script.
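For what it's worth, a sketch of the plan described above, with placeholder credentials, hosts, and paths (these stanzas illustrate the idea, not a vetted design; at high volume a heavy forwarder may be simpler):

Cron'd script (every 5 minutes):
curl -k -u svc_user:PASSWORD https://splunk-host:8089/services/search/jobs/export \
  -d search="search index=main earliest=-5m@m latest=@m" -d output_mode=json \
  > /opt/exported/results_$(date +%s).json

inputs.conf on the forwarder watching that directory:
[monitor:///opt/exported]
index = outbound
sourcetype = _json

outputs.conf pointing at the external indexer:
[tcpout:external_org]
server = indexer.external.example.org:9997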
Dear Community, I am writing a search for Windows services. I am trying to find the number of hosts that have, and do not have, a certain service. Here is the search I have to find the servers where the service is running:

index=*_oswin sourcetype="WMI:Service" source="WMI:Service" Name="Appdynamics Machine Agent"
| dedup host
| stats sum()

How can I do the second part, please? Also, I want to combine those two numbers into one pie chart. Any suggestion is highly appreciated!
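A sketch of one way to get both counts in a single search, assuming every monitored host reports at least some WMI:Service events (so hosts without the AppDynamics entry can be counted as missing it):

index=*_oswin sourcetype="WMI:Service" source="WMI:Service"
| stats values(Name) as services by host
| eval status=if(isnotnull(mvfind(services, "Appdynamics Machine Agent")), "has service", "missing service")
| stats dc(host) as hosts by status

mvfind returns the index of the first matching value or null, hence the isnotnull test. The resulting two-row table feeds a pie chart directly (Visualization > Pie).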
Hi, I was looking for some assistance in extracting the FQDNs from the paths below:

/var/log/remote/ldap.inftech.net/2021-08-03/auth.log
/var/log/remote/web-proxy-01.int.inftech.net/2021-08-03/proxy.log
/var/log/remote/ns01.inftech.net/2021-08-03/named.log

Regex isn't my strongest area, and one of the domains has an additional level, which makes it that much harder for me.
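Assuming the path lives in Splunk's source field (swap in whichever field actually holds it), the FQDN is simply the path segment after /var/log/remote/, so no domain-aware regex is needed:

... | rex field=source "^/var/log/remote/(?<fqdn>[^/]+)/"

[^/]+ grabs everything up to the next slash, so the extra subdomain level in web-proxy-01.int.inftech.net comes along for free.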
I keep getting the following error when I try to open Splunk in a web browser; how do I resolve this, please? (Note: I have a working internet connection.) Thank you.

This site can't be reached. 127.0.0.1 refused to connect. Try: checking the connection. ERR_CONNECTION_REFUSED
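ERR_CONNECTION_REFUSED on 127.0.0.1 usually means splunkd or Splunk Web isn't actually listening. A first check from the Splunk host itself (paths assume a default Linux install; adjust for Windows):

/opt/splunk/bin/splunk status
/opt/splunk/bin/splunk start

If Splunk is running, confirm you're browsing to the web port, http://127.0.0.1:8000 by default, and not the management port 8089.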
Hi, I'm pretty new to Splunk and I'm creating a dashboard for one of my environments. One thing I can't figure out is how to populate a table with entries from multiple fields, sorted by host. It should look like this:

HOST    VOLUME NAMES
A       ARC
B       ARC, LIV, FOR
C       LIV, FOR, FUN

The host and all of the volume names come from different fields. Any help would be greatly appreciated.
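A sketch assuming hypothetical field names host and volume_name (substitute whatever your events actually use):

index=myindex
| stats values(volume_name) as "VOLUME NAMES" by host
| eval "VOLUME NAMES"=mvjoin('VOLUME NAMES', ", ")
| sort host

stats values() collects the distinct volume names per host, and mvjoin flattens the multivalue result into the comma-separated form shown above.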
I am trying to create a new process to have a service (non-admin) account add new search heads to a cluster. Specifically, I need enough capabilities for the service account to initialize and add a search head to a cluster. I want to avoid granting "admin_all_objects", as that is too much privilege, and I want to adhere to a least-privilege policy.

I created a new local account and added it to the deployer, the SH cluster, and the new SH. I then added capabilities related to SH clustering so the service account can initialize and add an SH to a cluster. However, I am getting permission errors.

Capabilities added: edit_restmap, edit_search_head_clustering, edit_search_server, edit_server, list_search_head_clustering, rest_apps_management, rest_apps_view, rest_properties_get, rest_properties_set, restart_splunkd

Error when trying to initialize the SH:

/opt/splunk/bin/splunk init shcluster-config <CLUSTER INFO>
User 'shcluster_config' with roles { shcluster_config, user-shcluster_config } cannot write: /nobody/system/server { read : [ * ], write : [ admin ] }, removable: no

Does anybody know what capability I need to give this service account so it has enough access to add new SHs to a cluster? Thank you
Hi all, I have a specific webhook URL that is used in multiple Splunk alerts. Now I want to change that webhook. Is there a way to figure out which alerts are using this particular webhook?
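One way to find them, sketched with a placeholder URL: Splunk's bundled webhook alert action stores its target as action.webhook.param.url on the saved search, so a REST search can list the alerts that reference it:

| rest /servicesNS/-/-/saved/searches
| search action.webhook.param.url="*old-webhook.example.com*"
| table title eai:acl.app action.webhook.param.url

If the alerts use a custom webhook action instead of the bundled one, the attribute name will differ.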
Hi, what is the best way to map this Prometheus query to Splunk's query language?

Prometheus query: 100*sum_over_time(metric_name_gauge{}[1d:1m])/1440

metric_name_gauge has possible values 0 or 1. The query adds the values of metric_name_gauge over a period of 1 day at a resolution of 1 minute, then divides the result by the number of minutes in a day, which is 1440 (i.e., the percentage of the day the gauge was 1). Any idea how to implement this query in Splunk's query language? Thanks in advance.
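A sketch assuming the gauge is ingested as a Splunk metric in a hypothetical index metrics_idx; it mirrors the 1-minute resolution of the Prometheus subquery, then divides the day's sum by 1440:

| mstats avg(metric_name_gauge) as v WHERE index=metrics_idx earliest=-1d latest=now span=1m
| stats sum(v) as total
| eval pct=100*total/1440

If the values arrive as ordinary events rather than metrics, the mstats line can be replaced with a timechart span=1m max() over the raw field; either way the arithmetic assumes one sample per minute, as [1d:1m] does.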
I'm looking to combine data from a lookup file with data from our security server to create a comparison chart between how many alarms we get (security server) and how many of those are acknowledged (lookup file). I figured multisearch was the way to go, but I'm getting errors when using it. The search is below. The reason for the eval Date fields is that one column contains dates, so I needed to get them in the right order, since they were always out of order. The end goal is a daily chart showing x alarms and y acknowledgements.

| multisearch
    [| inputlookup genericlookupname.csv
     | eval Date=strptime(Date,"%m/%d/%Y")
     | sort Date
     | eval Date=strftime(Date,"%m/%d/%Y")]
    [search index=index EVDESCR="alarm"]
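For what it's worth, multisearch only accepts fully streaming subsearches, and both inputlookup (a generating command) and sort (non-streaming) violate that, which would explain the errors. A sketch of the same combination using append instead, with the lookup and index names copied from the question:

index=index EVDESCR="alarm"
| append
    [| inputlookup genericlookupname.csv
     | eval Date=strptime(Date,"%m/%d/%Y")
     | sort Date
     | eval Date=strftime(Date,"%m/%d/%Y")]

From there, a daily comparison could come from deriving a common day field on both result sets and counting by day plus a source marker; that part is left out because the lookup's columns aren't shown.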