All Topics


In my search result, I have the "Description" field. The Description field contains both text and 2 IP addresses. I want to check both IPs against my lookup table. If the IPs are not present in the lookup, then I need the result; if the IPs are present in my lookup table, then I want to filter the result out. Kindly help here.
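A minimal SPL sketch of one possible approach, assuming the lookup is named known_ips_lookup and its IP column is called ip (both hypothetical names): the rex pulls up to two IPv4 addresses out of Description, the lookup tags any that match, and the where keeps only events with no match.

<base search>
| rex field=Description max_match=2 "(?<extracted_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| lookup known_ips_lookup ip AS extracted_ip OUTPUT ip AS matched_ip
| where isnull(matched_ip)

Because extracted_ip can be multivalued, matched_ip is only null when neither IP is present in the lookup.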
Does anyone know if it is still possible to pull the Exchange message tracking logs using the Microsoft Office 365 Reporting Add-on for Splunk? I have followed the setup instructions and it worked for roughly 8 days, then stopped working for a month, then suddenly started working again, but only for a day. Examining the logs I've noticed messages saying "HTTP 401 Unauthorized ... call not properly authenticated", but I've never changed credentials. In another question (https://community.splunk.com/t5/All-Apps-and-Add-ons/O365-message-tracking-logs/m-p/487992) I've read that "O365 no longer supports basic authentication for O365 to get those log files.", but it worked for a while, so I do not understand. Has anyone come up with a solution for this? Regards, -G.
Hello - I am using the following two searches. The first search creates a table consisting of _time, idx, and b. There are two other fields available, s for source and h for host; however, we squash this information for performance reasons.

index=_internal sourcetype=splunkd type=Usage source=*license_usage.log
| table _time idx b
| rename idx as index, b as bytes

I have been trying to figure out a way to substitute the s & h data in the events by using a join, append, or appendcols with:

| tstats count WHERE index=* sourcetype=* source=* unit_id=* by index, sourcetype, source, host, dept
| table index, sourcetype, source, host, dept

Join example:

| tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
| table index sourcetype source host dept
| join type=inner index
    [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
    | table _time idx b
    | rename idx as index, b as bytes]

Append example:

| tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
| table index sourcetype source host dept
| append
    [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
    | table _time idx b
    | rename idx as index, b as bytes]

AppendCols example:

| tstats count WHERE sourcetype=* source=* host=* unit_id=* by index sourcetype source host dept
| table index sourcetype source host dept
| appendcols
    [ search index=_internal sourcetype=splunkd type=Usage source="/opt/splunk/var/log/splunk/license_usage.log"
    | table _time idx b
    | rename idx as index, b as bytes]

Results:
join: just fails with no data
append: the _time and bytes fields are blank
appendcols: leaves out the _time field, which I need to create timecharts with.

The end result should look like this:

_time  index   sourcetype   source   host   dept   bytes

where _time, index, and bytes come from the _internal logs, and index, sourcetype, source, host, and dept come from the | tstats search. Any help is greatly appreciated. Thank you.
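A sketch of one direction that might work, assuming hourly aggregation is acceptable: aggregate the license_usage events first, then left-join a per-index summary from tstats. Host, source, and sourcetype can only be approximated as the set of values seen per index, because license_usage squashes them per event; the span, the omission of dept/unit_id, and the left join are all choices of this sketch, not the only way.

index=_internal sourcetype=splunkd type=Usage source=*license_usage.log
| bin _time span=1h
| stats sum(b) AS bytes BY _time, idx
| rename idx AS index
| join type=left index
    [| tstats count WHERE index=* BY index, sourcetype, source, host
     | stats values(sourcetype) AS sourcetype, values(source) AS source, values(host) AS host BY index]
| table _time, index, sourcetype, source, host, bytes

Keep in mind that join subsearches are subject to result and runtime limits, so the tstats side should be kept as narrow as possible.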
In order to administer ES better, I am trying to find the queries and searches an app makes, in addition to what data models it uses. Thank you for your help in advance.
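One way to start, sketched below with REST: list an app's saved searches and then narrow to those that reference a data model. The app name SplunkEnterpriseSecuritySuite is just an example; adjust it and the filter to the app in question.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.app="SplunkEnterpriseSecuritySuite"
| table title, search, cron_schedule, eai:acl.app
| search search="*datamodel*"

Correlation searches defined this way are covered, but dashboards and macros would need similar passes over their own REST endpoints, so this only accounts for the saved-search portion.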
Setup: Splunk Enterprise is on a VM, everything works fine. 1 workstation has a universal forwarder.

Problem: I need them to talk to each other on the Stream part.

What I have done until now:
(Splunk VM) I have added the Stream app for Splunk Enterprise and restarted.
(Workstation) I have added the Stream app manually to C:\SplunkUniversalForwarder\etc\apps\Splunk_TA_stream
(Workstation) I have added into inputs.conf "splunk_stream_app_location = https://192.168.1.115:8000/en-us/custom/splunk_app_stream/"
(Workstation) I have not added anything on the workstation to streamfwd.

When I come to the Splunk VM, I am lost: what am I doing wrong?

The install of Splunk Stream into Splunk Enterprise (VM) was done with the normal config; in other words, I haven't changed where apps are installed, so everything is standard there. I have tried to read https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/ConfigureStreamForwarder but I'm not getting what I'm doing wrong here. Any suggestions please? Thanks.
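For reference, a sketch of what the forwarder-side config usually looks like; the IP and paths are taken from the question, and the exact stanza should be verified against the Stream docs for your version. It goes in Splunk_TA_stream\local\inputs.conf on the workstation, and the UF also needs outputs.conf pointing at the VM's receiving port (typically 9997) so the captured data can arrive.

[streamfwd://streamfwd]
splunk_stream_app_location = https://192.168.1.115:8000/en-us/custom/splunk_app_stream/
disabled = 0

On the Splunk VM side, the main things to check are that the Stream app is installed and that a receiving input (Settings > Forwarding and receiving) is enabled.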
Hi All, I have an event timestamp with milliseconds, but _time has Unix epoch seconds, and during search the timestamp comes from _time; I would like to have it with milliseconds.

I am using KV_MODE in the search cluster props.conf:

[k8s:dev]
KV_MODE = json

and I am trying to make changes in the HF props.conf, like TIMESTAMP_FIELDS, TIME_PREFIX, TIME_FORMAT, but none of them work. INDEXED_EXTRACTIONS is turned off in the HF props.conf.

HF props.conf:

[k8s:dev]
#temporary removed to fix https://jira/browse/DEVA-61153
#INDEXED_EXTRACTIONS = JSON
#TIME_PREFIX = {\\"@timestamp\\":\\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
TIMESTAMP_FIELDS = @timestamp
TRUNCATE = 200000
TRANSFORMS-discard_events = setnull_whitespace_indented,setnull_debug_logging
SEDCMD-RemoveLogProp = s/("log":)(.*)(?="stream":)//

This is the log, which is coming into Splunk via HEC:

{"log":"{\"@timestamp\":\"2021-08-03T09:00:57.539+02:00\",\"@version\":\"1\",\"message\":

My question is: do changes like TIMESTAMP_FIELDS, TIME_PREFIX, and TIME_FORMAT on the HF have an effect on this process when INDEXED_EXTRACTIONS is not in use? Thank you very much for your answers.
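A few hedged notes and a sketch, based on my understanding of how these settings interact: TIMESTAMP_FIELDS only takes effect together with INDEXED_EXTRACTIONS, so with that commented out it does nothing; TIME_PREFIX/TIME_FORMAT do work on a heavy forwarder, but only for data that actually passes through the parsing pipeline (the HEC /services/collector/raw endpoint does, while the /event endpoint takes its time from the request's time field or receipt time). If the data is parsed, something like the following on the HF would target the escaped @timestamp inside the log string (the lookahead value is an assumption):

[k8s:dev]
TIME_PREFIX = \\"@timestamp\\":\\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40

The sample timestamp has three fractional digits and a +02:00 offset, hence %3N and %:z rather than %6N.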
Hi, can someone help me with an SPL so that I can list the indexes of a data model? Data model name - authentication.malware. Appreciate your help in advance.
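A minimal sketch using tstats, assuming the CIM Authentication and Malware data models are what "authentication.malware" refers to (adjust the data model names to match your environment):

| tstats count FROM datamodel=Authentication BY index, sourcetype

| tstats count FROM datamodel=Malware BY index, sourcetype

Each search lists the indexes (and sourcetypes) currently feeding that data model; if the model is accelerated, adding summariesonly=true restricts the result to accelerated data.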
I would like to find:
1. all unique combinations of actionKey, modelName, programName
2. only considering data if they have a confidence score > 70.00

Splunk raw log:

2021-08-04 07:35:39,069 INFO [boundedElastic-87] [traceId="a4d01423048aa5de"] Request{userId='6699249',channelWise='SOCIAL', cid='1627958668279-9a93682610ee1c700c7e5d4ad01e8c76207274', sid=b8d2a070-f404-11eb-9cf4-5d474ec9ecbc, mlrecopred=[{actionKey=search, confidenceScore=83.46, modelName=model_forrest, programName=sapbased}, {actionKey=shipping_and_delivery, confidenceScore=82.94, modelName=model_forrest, programName=sapbased}, {actionKey=inventory_check, confidenceScore=65.21, modelName=model_forrest, programName=sapbased}, {actionKey=search, confidenceScore=63.46, modelName=event_handler, programName=sapbased}, {actionKey=shipping_and_delivery, confidenceScore=55.45, modelName=event_handler, programName=sapbased}], interactionId=0d6b031fdddba957, uniqueId='ed064f15d49c70ea7f540f7fe2ed2b7083e6eef8760f645f05d6600ad1208c3d'}
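A sketch of one way to do this in SPL: pull each {...} entry out of mlrecopred as its own multivalue element, expand, re-extract the four fields, then filter and deduplicate. The field boundaries are based on the sample event above.

<base search>
| rex max_match=0 "(?<reco>\{actionKey=[^}]+\})"
| mvexpand reco
| rex field=reco "actionKey=(?<actionKey>[^,]+), confidenceScore=(?<confidenceScore>[^,]+), modelName=(?<modelName>[^,]+), programName=(?<programName>[^}]+)"
| eval confidenceScore=tonumber(confidenceScore)
| where confidenceScore > 70.00
| stats count BY actionKey, modelName, programName

The stats at the end gives each unique combination once (with how often it occurred); swap it for dedup or table if the count isn't wanted.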
Hi All, I have configured an alert for the search below:

index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" "waiting for the process to die"

Output:
8/3/21 9:38:11.000 AM dev-obiee-ux08 The server 'obips2' with process id 12714242 is no longer alive; waiting for the process to die. obips2 obiee:nodemanager:log Aug 3, 2021 5:38:11 AM EDT

But sometimes when my server process dies it restarts automatically within 60 seconds, which can be seen with:

index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" "is running now"

Output:
8/3/21 9:39:27.000 AM dev-obiee-ux08 The server 'obis2' is running now. obis2 obiee:nodemanager:log Aug 3, 2021 5:39:27 AM EDT

So I want to write the search query in a way that generates the alert only if the server process dies and doesn't come back up within 120 seconds. The fields used in the search are _time, host, Message, OBIEE_Comp, sourcetype, and time, and to generate the alert the OBIEE_Comp needs to be the same.
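A sketch of one common pattern for this, using transaction with keepevicted so that "death" events with no matching "running now" within 120 seconds are the ones left over (the source path is copied from the question; run the alert over a window that lags real time by a couple of minutes so a restart has time to show up):

index="ebs_red_0" host="dev-obiee-ux0*" source="/obiee_12c/app/oracle/product/12212/user_projects/domains/bi/nodemanager/nodemanager.log" ("waiting for the process to die" OR "is running now")
| transaction OBIEE_Comp startswith="waiting for the process to die" endswith="is running now" maxspan=120s keepevicted=true
| where closed_txn=0

Alert when the number of results is greater than zero. A streamstats-based search grouped by OBIEE_Comp would be an alternative if transaction proves too heavy.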
In the environment where Splunk is running, a process called "splunk-powershell.exe" is running. What role does this process play? The executable file was in the following folder, and when I looked at its properties there was no information: C:\Program Files\SplunkUniversalForwarder\bin\ Please tell me more about this process.
I have 2 servers: 1 is Windows, 2 is Unix. The data (CPU, memory, disk usage) on these two servers comes into Splunk. My question is: I need an alert if their usage exceeds 90%.
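A very rough sketch of the alert search, with heavily assumed names: the index, host names, and the percentage fields (cpu_used_pct, mem_used_pct, disk_used_pct) all depend on which add-ons (for example the Splunk Add-on for Unix and Linux and the Splunk Add-on for Microsoft Windows) collect the data, so substitute whatever fields those inputs actually produce.

index=os host IN (winserver01, nixserver01)
| stats latest(cpu_used_pct) AS cpu_pct, latest(mem_used_pct) AS mem_pct, latest(disk_used_pct) AS disk_pct BY host
| where cpu_pct > 90 OR mem_pct > 90 OR disk_pct > 90

Save it as an alert that triggers when the number of results is greater than 0, scheduled at whatever interval the inputs report on.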
For example, one field of the Email data model is "recipient" and it comes from tag=email. However, my email information comes from the Microsoft O365 integration, where the recipient information is given in a field called "ExchangeDetails.Recipients{}". As far as I have been able to understand, I have to modify the "email" tag in "Event Types" to look in "index=o365 Workload=Exchange" for email-related logs, and after that I have to create an alias so that "ExchangeDetails.Recipients{}" is equivalent to "recipient", as indicated in the data model. Is that correct? Thank you for your assistance.
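That is broadly how CIM mapping is usually done; a sketch of what the pieces could look like in .conf form, assuming the events use the add-on's o365:management:activity sourcetype (the eventtype name is made up, and the same can be done through the Settings UI instead):

# eventtypes.conf
[o365_exchange_email]
search = index=o365 Workload=Exchange

# tags.conf
[eventtype=o365_exchange_email]
email = enabled

# props.conf
[o365:management:activity]
FIELDALIAS-o365_recipient = "ExchangeDetails.Recipients{}" AS recipient

These need to live in an app visible to the search head(s) where the data model is used.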
Hello everyone, I have a question about using curl to query Splunk data from the outside; for example, sending index="_internal" | stats count from outside and getting the count returned. Do you have any relevant documents? If so, please send a link. Thank you very much.
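A minimal sketch against the REST search API on the management port (8089 by default); the host and credentials are placeholders:

curl -k -u admin:changeme https://splunk-host:8089/services/search/jobs/export \
  --data-urlencode search="search index=_internal | stats count" \
  -d output_mode=json

The relevant documentation is the Splunk REST API Reference, in particular the search/jobs and search/jobs/export endpoints.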
Hi all, I have already integrated O365 using the O365 Management API and am collecting the user, admin, system, and policy actions and events for O365: https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference I want to collect similar data from a local Exchange server now, but I don't know the logs. The Splunk Add-on for Microsoft Exchange collects the following data using scripted inputs: Senderbase/reputation data, topology and health information, and mailbox server health and usage information. Is there even similar data on a local MS Exchange, and is it possible to collect that data with a UF? Any help to point me in the right direction would be appreciated. Best, N.
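For the activity-style data (who sent what to whom), the closest on-premises equivalent is usually the Exchange message tracking logs, which are plain files a UF can monitor. A sketch of an inputs.conf stanza on the Exchange server's UF; the path is the default for Exchange 2013 and later, and the sourcetype and index are assumptions to align with whatever add-on or extractions you end up using:

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
sourcetype = MSExchange:2013:MessageTracking
index = exchange
disabled = 0

Admin audit and IIS logs on the Exchange server can be picked up the same way if admin/user action data is also needed.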
Hi, I have a query which returns around 4000 results and I want to run a map query for all those 4000 results. This is the query, but it doesn't return any results. The individual queries work fine.

index=xxxxx_xxxxx2_idx ns=yyy-yyyy xxxx-t1-* totalDuration
| spath input=message output=overallTimeTaken path=totalDuration
| where overallTimeTaken > 226
| spath input=message output=yyy-yyyy-correlation-id-var path=yyy-yyyy-correlation-id
| map search="search index=xxxxx_xxxxx2_idx ns=xxxx-api-v4 app_name=xxxxarngs-* xxxxRequestLoggingHandlerImpl $yyy-yyyy-correlation-id-var$
    | head 1
    | eval arngServerTimeTaken=mvindex(split(_raw," "),-2)
    | eval id=mvindex(split(_raw," "),-8)
    | stats id, max(arngServerTimeTaken) as arngServerTimeTaken
    | appendcols [ search index=xxxxx_xxxxx2_idx ns=xxxx-api-v4 app_name=xxxxtranslation-* xxxxRequestLoggingHandlerImpl $yyy-yyyy-correlation-id-var$
        | head 1
        | eval translationServerTimeTaken=mvindex(split(_raw," "),-2)
        | stats max(translationServerTimeTaken) as translationServerTimeTaken]" maxsearches=0
| table id, arngServerTimeTaken

The yyy-yyyy-correlation-id-var values (around 4000 of them from the first query) go as input to map. I need to make it work with map/multisearch, as I have 10 other columns that I want to add to the result from other search queries.
Hi, we are trying to move from a single-site to a multisite Splunk cluster, although it's not clear how the SH clustering is supposed to work.

1. As per the documentation, the recommended way is to have two separate SH clusters - but it doesn't look like we will have knowledge bundle (configs/user knowledge objects etc.) replication between the two SH clusters formed. If this is the case, then I don't get the point of suggesting multisite as a DR solution: when site 1 fails, users connecting to site 2 won't have their knowledge objects and settings on the new SH cluster!? https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Multisitearchitecture

2. The other thing that's suggested, to have knowledge bundle and search artifact replication, is to have an SH cluster spanning both sites - but this also can't be suggested as a DR solution, since in this case whenever the site with the majority (or with the same number) of SHs fails completely, the SH members at the other site won't be able to form a cluster since they won't have a majority. A workaround suggested here is to deploy a static captain instead. https://docs.splunk.com/Documentation/Splunk/8.2.0/DistSearch/DeploymultisiteSHC
I used DBconnect to pull data from the database every 1 minute (cron: */1 * * * *). I would like to ask if this schedule simply works as:
#1 - Run at e.g. 10:00, then 10:01, then 10:02, and so on... or?
#2 - Run at e.g. 10:00, wait until the job is done (for example 10:00:35), so the next job runs at 10:01:35.
Please advise, as we are encountering missing data when we tried #1. Thanks
Hi, I've got some machine agent installations where I'm getting messages like this:

[#|2021-08-02T14:38:39.254+1000|WARNING|glassfish 4.1|com.appdynamics.SIM|_ThreadID=80;_ThreadName=http-listener-2(7);_TimeMillis=1627879119254;_LevelValue=900;|#SIM000121 The maximum number of monitored processes per machine allowed has been reached for machine 33. The limit sim.processes.count.maxPerMachine is set to 1000 processes. This limit will be reset after the next process purging or when some processes are deleted by the user. Could not create 9 processes for machine 33|#]

I then go look at machine 33 and find that it's got numerous duplicates of the same processes, varying only in start/end time. It seems like if I increase the maxPerMachine limit, we'll just delay running into the limit again, because it's constantly using up the count for the same processes over and over. This seems like a bug. Is there some workaround?
Hi, I've exceeded my configured match_limit in limits.conf with this regex:

"log":\s"(?<log_source>.*?)\s(?<ISO8601>.*?)\| (?<exchangeId>.*?)\|(?<AUDIT_trackingId>.*?)\| (?<client_ip>.*?)\|(?<FAPI_ip>.*?)\|(?<AUDIT_roundTripMS>.*?) ms\| (?<AUDIT_proxyRoundTripMS>.*?) ms\| (?<AUDIT_userInfoRoundTripMS>.*?) ms\| (?<AUDIT_resource>.*?)\s\[\]\s\/(?<AUDIT_subject>.*?)\/\*\:(?<dest_port>.*?)\|(?<AUDIT_authMech>.*?)\|(?<AUDIT_scopes>.*?)\| (?<AUDIT_client>.*?)\| (?<AUDIT_method>.*?)\| (?<AUDIT_requestUri>[^\s\?"|]++)(?<uri_query>\?[^\s"]*)?.*?\| (?<AUDIT_responseCode>.*?)\|(?<AUDIT_failedRuleType>.*?)\|(?<AUDIT_failedRuleName>.*?)\| (?<AUDIT_applicationName>.*?)\| (?<AUDIT_resourceName>.*?)\| (?<AUDIT_pathPrefix>.*?)\s

Is there a way to make it more efficient? Please advise.
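The usual culprit for hitting match_limit is the long chain of lazy .*? groups, which forces heavy backtracking whenever a later literal fails to match. Since most fields here are pipe-delimited, replacing .*? with a negated character class such as [^|]* (and [^ ]+ where the delimiter is a space) bounds each group and usually cuts the step count dramatically. A sketch of the first few groups rewritten that way, to illustrate the substitution rather than as a drop-in replacement:

"log":\s"(?<log_source>[^ ]+)\s(?<ISO8601>[^|]+)\| (?<exchangeId>[^|]*)\|(?<AUDIT_trackingId>[^|]*)\| (?<client_ip>[^|]*)\|(?<FAPI_ip>[^|]*)\|(?<AUDIT_roundTripMS>[^|]*?) ms\|

Applying the same change to the remaining groups (and anchoring the expression where possible) should keep it well under the limit without changing what is captured, provided none of the captured values can legitimately contain a pipe.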
Will Splunk do a stacked area chart? I'm able to get an area chart, but it's not 'stacked' (so each proxy totals to an aggregate). I'm wondering if Splunk can even do that? I looked at the documentation and it appeared that it could, so I'm hoping maybe I'm just doing something wrong. Under the 'Visualization' tab I'm using:

index = "myindex"
| bin _time span=5m
| stats sum(cs_bytes) as Bytes by proxy_server _time
| eval Kbps=(((Bytes*8)/1000)/300)
| timechart span=5m list(Kbps) by proxy_server
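Splunk's area chart can stack; two things likely get in the way here. list(Kbps) produces multivalue string results that can't be stacked, so a numeric aggregator is needed in the timechart, and the chart's Stack Mode has to be set explicitly. A sketch of the adjusted search, assuming sum is the right way to combine the 5-minute buckets:

index="myindex"
| bin _time span=5m
| stats sum(cs_bytes) AS Bytes BY proxy_server, _time
| eval Kbps=((Bytes*8)/1000)/300
| timechart span=5m sum(Kbps) BY proxy_server

With the chart type set to Area and Format > Stack Mode set to stacked, each proxy_server becomes a layer and the top edge shows the aggregate.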