All Topics

I recently updated Splunk_TA_Windows and am seeing this error on my search head cluster:

[Indexers] Could not load lookup=LOOKUP-user_account_control_property

This is an automatic lookup generated in the default directory of the app. I'm not familiar with it, and I'm not seeing this error on my deployer (standalone) instance. The configs appear the same. Any help is greatly appreciated. Thanks!

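A starting point is to confirm that the lookup definition and its backing CSV both reached the indexers, since the message comes from the [Indexers] group. A minimal sketch using btool on a *nix indexer (paths assume a default install; the exact app directory name may differ in your environment):

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i user_account_control
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i user_account_control
ls $SPLUNK_HOME/etc/apps/Splunk_TA_windows/lookups/

The --debug flag prints which file each effective setting comes from, which helps spot a stale copy of the app winning over the updated one.
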
Hi, I'm trying to pull in Windows Event logs from the Windows PowerShell path. This path includes 800s, which I've seen in Event Viewer, so I know they're generated and stored there. I just can't seem to pull anything, and I don't see much help on the internet about pulling this path. This is my inputs.conf:

[WinEventLog://Windows PowerShell/]
disabled=0

Note: This is different from the other PowerShell path, where I get my 4103 and 4104 codes:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled=0

Any help is appreciated. Thanks.

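For comparison, here is the stanza with the channel name written exactly as Event Viewer displays it, with no trailing slash; it may be worth testing whether the trailing slash in the stanza above prevents the channel from matching (this is a sketch, not a confirmed fix):

[WinEventLog://Windows PowerShell]
disabled = 0
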
Hello, I need your help please. I have two tables resulting from two searches, and I need to join these two tables to make a cumulative bar chart by date. (My two source tables were attached as images.) What I want to achieve is:

Datum        A1    A2    A3    A4    A5     A6
2022-02-08   5.7         3.7   1.9   4.56   90.3

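One common pattern is to run the second search inside append and then merge rows that share the same date. A minimal sketch, assuming both searches already output a Datum column plus their own value columns (the <first search> / <second search> placeholders stand for your two existing searches):

<first search>
| append [ <second search> ]
| stats values(*) as * by Datum
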
In the query, _time is already formatted, but when I try to export the data to CSV it shows different formats.

Query:

index="win event" host IN (USMDCKPAP30074) Event=6006 OR Event="6005" Type=Information
| eval Uptime = if(Event=6005,strftime(_time, "%Y-%d-%m %H:%M:%S"),null())
| table host Uptime

E.g.:

2022-31-01 10:00:42
2022-29-01 06:40:11
2022-27-01 12:55:56

After exporting:

8/1/2022 4:08
1/1/2022 4:03
2021-25-12 04:03:29
2021-18-12 04:02:54
2021-16-12 10:14:45
2021-16-12 10:08:21
11/12/2021 4:08
4/12/2021 4:11

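One thing worth checking: the format string "%Y-%d-%m" puts the day before the month, which is a nonstandard order. A spreadsheet opening the CSV will silently re-parse any value whose day number could pass for a month (12 or less) and display it in its own locale format, leaving the rest as plain text, which matches the mixed output above. A minimal sketch with the conventional year-month-day order, which spreadsheets treat consistently:

index="win event" host IN (USMDCKPAP30074) Event=6006 OR Event="6005" Type=Information
| eval Uptime = if(Event=6005, strftime(_time, "%Y-%m-%d %H:%M:%S"), null())
| table host Uptime
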
Hi All,

We have a number of microservices with a correlation id flowing across the requests and responses. What I'm trying to do is create a flow of request and response for one correlation id. Example log:

correlation id   time       source     message
123              12:00:00   Service A  Enter service A
123              12:00:01   Service A  Calling Service B
123              12:00:02   Service B  Routing to Service C
123              12:00:03   Service C  Result Found. Response User 1
123              12:00:04   Service B  Using User 1 to find resource
123              12:00:05   Service B  Resource Found. Calling Service D
123              12:00:06   Service D  Sub-resource not found. Response: null
123              12:00:07   Service B  Return result. Response User1, resource1
123              12:00:08   Service A  Return User1, resource1

From the example log, I would like to be able to group:

Service A (12:00:00 - 12:00:01)
Service B (12:00:02)
Service C (12:00:03)
Service B (12:00:04 - 12:00:05)
Service D (12:00:06)
Service B (12:00:07)
Service A (12:00:08)

What I'm trying to do right now is a simple event result first, before going on to any fancier visualization. I tried using transaction, but I can't separate the source when there's a different call in between. Here's the query that I've tried:

123
| eval _time=strptime(timegenerated,"%Y-%m-%dT%H:%M:%SZ")
| sort - _time
| transaction source

Any help is greatly appreciated.

Thanks,
Allen

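transaction groups all events sharing the same source value regardless of what happens in between, so it cannot keep the two Service B runs apart. An alternative is to number each run of consecutive events from the same source with streamstats. A minimal sketch, assuming the fields are extracted as correlation_id, source, and message:

| sort 0 _time
| streamstats current=f last(source) as prev_source by correlation_id
| eval new_run = if(isnull(prev_source) OR source != prev_source, 1, 0)
| streamstats sum(new_run) as run_id by correlation_id
| stats min(_time) as start_time, max(_time) as end_time, values(message) as messages by correlation_id, run_id, source
| sort 0 correlation_id, start_time
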
Follow-on from my previous question. I ended up using a slightly different solution involving match for the case criteria. Since the query inputs are being provided by token values to a Splunk Studio dashboard, I would not be able to properly break up and quote each term of a multi-value text input. By using match, I can just tell users to use | as a separator instead and run a search like:

| eval state=case(match(foo, "^($foo_token$)$") AND match(bar, "^($bar_token$)$"), 1,
                  NOT match(foo, "^($foo_token$)$") AND NOT match(bar, "^($bar_token$)$"), 2,
                  1=1, 0)

However, the table cannot run this search. Even if both foo and bar have input values, the table shows "Waiting for input." If I escape the end-of-line match character like $$ or like \$, I see the same "Waiting for input." If I use only:

| eval state=case(match(foo, "^($foo_token$)$"), 1, 1=1, 0)

the search runs and produces the expected results, so it seems to be a problem with having two or more $s. I want to search for whole-line values of fields only. How can I do this?

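One workaround to try is to avoid having any literal $ characters adjacent to the tokens, by generating the dollar sign at search time and handing match a computed pattern. This sketch rests on two assumptions I have not verified against your Splunk version: that eval's printf supports the %c conversion, and that match accepts a regex held in a field rather than only a literal string:

| eval dollar = printf("%c", 36)
| eval foo_pat = "^(" . "$foo_token$" . ")" . dollar
| eval bar_pat = "^(" . "$bar_token$" . ")" . dollar
| eval state = case(match(foo, foo_pat) AND match(bar, bar_pat), 1,
                    NOT match(foo, foo_pat) AND NOT match(bar, bar_pat), 2,
                    true(), 0)
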
Hi Splunk Community,

I need some help with the following query:

(index=* OR index=*) (sourcetype=A OR sourcetype=C OR sourcetype=D) (a_location=* OR b_location=* OR c_location=* OR d_location=*) (a_location!=*S1* OR b_location!=*S1* OR c_location!=*S1* OR d_location!=*S1*) User!=basketball UserGroup!=baseball
| eval Interface_card=mvappend(a_location,b_location,c_location,d_location)
| mvexpand Interface_card
| bin span=1d _time
| stats sum(TCDuration) as TCDuration by _time Interface_card
| eval TCDuration=TCDuration/1000
| eval Utilization=round(((TCDuration/86400)*100),1)
| eval Utilization=if(Utilization > 100, 100, Utilization)
| fields - TCDuration
| timechart eval(round(avg(Utilization),1)) by Interface_card limit=0

1. How can I optimize it?
2. How can I filter only Utilization between 0-40 and/or 70-99, or any other limit I want to filter on?

Appreciate any help. Thank you!

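For question 2, a where clause placed after Utilization is computed and before the timechart is the usual approach; adjust the ranges as needed. A minimal sketch of the tail of the pipeline:

| eval Utilization=if(Utilization > 100, 100, Utilization)
| where (Utilization >= 0 AND Utilization <= 40) OR (Utilization >= 70 AND Utilization <= 99)
| fields - TCDuration
| timechart eval(round(avg(Utilization),1)) by Interface_card limit=0

For question 1, the leading (index=* OR index=*) clause is redundant as written; naming the actual index or indexes is usually the single biggest win, since it narrows the set of events scanned.
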
When on certain pages, my search head says "Loading" and this never goes away. This also causes weird behaviour and JavaScript errors when viewing certain Splunk administrative pages, such as app installation or LDAP settings. Looking into the code, I find the following:

<script type="text/javascript">
//<![CDATA[
this.messenger = Splunk.Messenger.System.getInstance();
// a misconfigured hierarchy can often derail the module loading, so the 'Loading' string can get stuck there.
$("#loading").hide();
//]]>
</script>

It says: "a misconfigured hierarchy can often derail the module loading, so the 'Loading' string can get stuck there." What does this mean? How can I see what the misconfiguration in the hierarchy is? What can I look at to see failed module loading? I'm sure that a Splunk app is causing this but, short of trial and error, I am unsure which one it could be. Any help would be very useful! Thanks!

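One place to look for failed module loading is splunkweb's own log, either in $SPLUNK_HOME/var/log/splunk/web_service.log or via search. A minimal sketch (field extraction for this sourcetype can vary by version, so treat it as a starting point):

index=_internal sourcetype=splunk_web_service (ERROR OR WARN)

The browser's developer console on an affected page can also show which module URL is failing to load, which often names the offending app directly.
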
Hi there,

I have a simple dashboard that allows me to see growth in the number of Live / Archived accounts we manage in Google. We currently have a daily pull of the directory service into Splunk, which allows the following query to be run (I have a few like this, with Archived / Live being the adjustments I make):

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| timechart count by archived span=1d cont=FALSE

In the last week or so we have had some issues: sometimes we get two or three directory pulls into Splunk, which results in the graph displaying double / triple the count of data (see attached image).

My question is as follows: are there any additional variables I can add to my query to interpret ONLY one data pull per 24-hour period? This would allow for consistent reporting in the face of inconsistent directory pulls into Splunk. I have poked around a bit with timechart, but feel perhaps I should be using a stats command instead...? Any direction on which approach to use is appreciated!

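One approach is to collapse the duplicates before counting: bucket events by day, keep one event per account per day with dedup, then chart. A minimal sketch, assuming each pull emits one event per account and the address uniquely identifies an account:

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| rename "emails{}.address" as address
| bin span=1d _time
| dedup _time address
| timechart span=1d cont=FALSE count by archived
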
Hi, all! I have a table that I want to transform into a different layout (the current and desired tables were attached as images). How can I do this? Can anyone help me?

I need to get the list of .conf files. When I run the following Splunk query:

| rest /services/configs/conf-props

it returns the conf objects (stanzas), but I need to find the .conf files themselves instead of the objects. Any help would be appreciated! Thanks!

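The properties endpoint enumerates configuration files rather than the stanzas inside them, so something like the following may be closer to what you want (a sketch; what is returned depends on your role and app context):

| rest /services/properties
| table title
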
We have been bringing our Splunk 8.2.2.1 Enterprise stand-alone server up with SAML SSO using our windows.net connector for AD integration. This is working on our test machine without issue. (We had issues getting it to 100%, but 'tis the way with SAML and SSO.) We have attempted to replicate it on our production server, and keep getting the following error, which is not helpful:

02-08-2022 17:40:19.233 -0500 INFO loader [9134 MainThread] - SAML cert db registration with KVStore successful
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder destructor for domain: samlv2, authType: SAML, refCnt: 2
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder destructor for domain: samlv2, authType: SAML, refCnt: 3
02-08-2022 17:39:01.352 -0500 ERROR UserManagerPro [7659 SchedulerThread] - SAML config is invalid, Reconfigure it.
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder destructor for domain: samlv2, authType: SAML, refCnt: 4
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder constructor for domain: samlv2, authType: SAML, refCnt: 4
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder constructor for domain: samlv2, authType: SAML, refCnt: 3
02-08-2022 17:39:01.352 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder constructor for domain: samlv2, authType: SAML, refCnt: 2
02-08-2022 17:39:01.191 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder destructor for domain: samlv2, authType: SAML, refCnt: 2
02-08-2022 17:39:01.191 -0500 DEBUG AuthenticationManager [7659 SchedulerThread] - AuthProviderHolder destructor for domain: samlv2, authType: SAML, refCnt: 3
02-08-2022 17:39:01.191 -0500 ERROR UserManagerPro [7659 SchedulerThread] - SAML config is invalid, Reconfigure it.

We have enabled debug logging on the user manager and authentication threads, but this isn't adding any more detail. authentication.conf looks like this (only the domains, names, GUIDs, etc. have been altered):

[authentication]
#authSettings = samlv2
#authType = SAML
authSettings = splunk
authType = splunk

[authenticationResponseAttrMap_SAML]
mail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
realName = http://schemas.microsoft.com/identity/claims/displayname
role = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups

[samlv2]
entityId = splunkEntityProdId
fqdn = https://mydomain.com
idpCertPath = idpCert.pem
idpSLOUrl = https://login.microsoftonline.com/8a4925a9-fd8e-4866-b31c-f/saml2
idpSSOUrl = https://login.microsoftonline.com/8a4925a9-fd8e-4866-b31c-f/saml2
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://sts.windows.net/8a4925a9-fd8e-4866-b31c-f/
lockRoleToFullDN = true
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
redirectPort = 8000
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
ssoBinding = HTTP-POST
allowPartialSignatures = true

[roleMap_SAML]
admin = b5ba2c9d-6b90-4e48-8746-16d52
GSP-Splunk-Prod-Admin = b5ba2c9d-6b90-4e48-8746-16d52
GSP-Splunk-Prod-Other = 5c89568d-d73f-4022-92b2-f9768
GSP-Splunk-Prod-PowerUser = 6f21a008-d90c-434c-aa48-7ae08
GSP-Splunk-Prod-User = 30a721f4-0281-410f-8e3b-7f9c
power = 6f21a008-d90c-434c-aa48-7ae08
user = 30a721f4-0281-410f-8e3b-7f9cc7

[userToRoleMap_SAML]
johndoe@none.com = admin;GSP-Splunk-Prod-Admin::John Doe::johndoe@none.com

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 1
expireAlertDays = 15
expirePasswordDays = 90
expireUserAccounts = 1
forceWeakPasswordChange = 0
lockoutAttempts = 7
lockoutMins = 30
lockoutThresholdMins = 5
lockoutUsers = 1
minPasswordDigit = 0
minPasswordLength = 8
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordUppercase = 0
passwordHistoryCount = 3
verboseLoginFailMsg = 1

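When the only message is "SAML config is invalid, Reconfigure it," comparing the effective merged configuration on the test and production machines can help, since a stray copy of authentication.conf in another app or in system/local can silently win over the one being edited. A minimal sketch (the --debug flag prints which file each setting comes from):

$SPLUNK_HOME/bin/splunk btool authentication list --debug
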
I have three different sourcetypes, Result, Node, and Error, under the same index.

Result has id, model
Node has address, id, resultid (which is a key to id in Result)
Error has err_msg, id, nid (which is a key to id in Node)

I want to export a result with a stats count of err_msg by Node and model. I tried joins, and a subsearch with the IN operator feeding another query, but no luck:

index=index1 sourcetype=Node [ search index=index1 sourcetype=Error | stats count by err_msg ]
| stats count by id, err_msg

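Since the relationship is two hops (Error to Node, then Node to Result), one way to sketch it is with explicit joins on renamed keys. The field handling below follows the description above; note that join subsearches are subject to result limits, so on large datasets a stats-based merge would be preferable:

index=index1 sourcetype=Error
| rename nid as node_id
| join type=left node_id [ search index=index1 sourcetype=Node | rename id as node_id, resultid as result_id | fields node_id result_id address ]
| join type=left result_id [ search index=index1 sourcetype=Result | rename id as result_id | fields result_id model ]
| stats count by address, model, err_msg
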
Hi,

Let me start by saying that I have very limited knowledge about Splunk; it's normally not my area of expertise. I was doing some performance investigations and accidentally came across some interesting findings for Splunk. With one of the tools I'm using, I could see that splunkd.exe had a very high latency towards our Splunk servers, 700 ms-1000 ms, and more than 20% failed connections. I can't really verify those numbers, because if I do a normal ping towards the same servers, I get around 20 ms, so it's only splunkd.exe that has the high latency. I was wondering if anyone could point me in the right direction on where to look, to get an understanding of this "issue".

outputs.conf:

[tcpout]
defaultGroup = primary_heavy_forwarders
maxQueueSize = 7MB
useACK = true
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:primary_heavy_forwarders]
server = NAME1:9997, NAME2:9997, NAME3.com:9997
#clientCert = $SPLUNK_HOME/etc/auth/server.pem
#sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
#sslPassword = ********
#sslVerifyServerCert = true

splunkd.log (part of today's log file, from a client; the Swedish OS error messages are translated to English below):

02-09-2022 10:48:20.841 +0100 INFO ApplicationLicense - app license disabled by conf setting.
02-09-2022 10:48:26.777 +0100 WARN TcpOutputProc - Cooked connection to ip=IP1:9997 timed out
02-09-2022 10:48:50.836 +0100 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-09-2022 10:48:56.568 +0100 WARN TcpOutputProc - Cooked connection to ip=IP1:9997 timed out
02-09-2022 10:49:09.291 +0100 INFO TcpOutputProc - Closing stream for idx=IP2:9997
02-09-2022 10:49:09.291 +0100 INFO TcpOutputProc - Connected to idx=IP1:9997, pset=0, reuse=0. using ACK.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
02-09-2022 10:50:17.238 +0100 INFO TcpOutputProc - Connection to IP2:9997 closed. Read error. An existing connection was forcibly closed by the remote host.
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log|host::807|splunkd|2728, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log|host::807|splunkd|2727, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log|host::807|splunkd|2721, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log|host::807|splunkd|2713, streamId=0, offset=0 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Possible duplication of events with channel=source::WinEventLog:Security|host::807|XmlWinEventLog:Security|, streamId=3264402492634740844, offset=200186306 on host=IP2:9997
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP1:9997 failed. A socket operation was attempted to an unreachable network.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP1:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP2:9997 failed. A socket operation was attempted to an unreachable network.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP2:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Applying quarantine to ip=IP2 port=9997 _numberOfFailures=2
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP3:9997 failed. A socket operation was attempted to an unreachable network.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP3:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputFd - Connect to IP1:9997 failed. A socket operation was attempted to an unreachable network.
02-09-2022 10:50:17.238 +0100 ERROR TcpOutputFd - Connection to host=IP1:9997 failed
02-09-2022 10:50:17.238 +0100 WARN TcpOutputProc - Applying quarantine to ip=IP1 port=9997 _numberOfFailures=2

limits.conf:

# [thruput]
# maxKBps = 0

The only thing I have tested myself so far is adding the servers to the hosts file, without any success. I also noticed that outputs.conf uses DNS names while the log file shows IPs, but maybe that does not matter.

Any help would be much appreciated. Thanks in advance!

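To see how often this forwarder's output channel is failing over time, the forwarder's internal logs can be searched from the indexer side (they appear to be forwarded, given the forwardedindex whitelist above). A minimal sketch; replace <forwarder_host> with the actual host name:

index=_internal host=<forwarder_host> sourcetype=splunkd (component=TcpOutputProc OR component=TcpOutputFd) (log_level=WARN OR log_level=ERROR)
| timechart span=1h count by component

The "unreachable network" socket errors in the log point at something between the forwarder and the indexers (VPN, firewall, proxy, NIC) rather than at Splunk's own throughput settings, which may be a useful direction for the network team.
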
I am trying to bring future data into a dashboard, including events from this week's Friday 17:00 UTC until the next week, same day, same hour, but I don't get any valid results whatsoever in two test cases.

I have declared 4 tokens:

<eval token="earliest_default">relative_time(now() , "+1w@w5+17h")</eval>
<eval token="latest_default">relative_time(now() , "+7d@d+17h")</eval>
<eval token="time_from">relative_time(now(),"+1w@w+17h")</eval>
<eval token="time_to">relative_time(now(),"+7d@d+17h")</eval>

And queried a search which evaluates the time for a week's span (TEST 1):

<search>
  <query>index="xxx_index" | head 1
| eval thisFriday17 = if( strftime(now(),"%w")=="5", relative_time(now() , "+1w@w+17h"), relative_time(now() , "+7d@d+17h"))
| eval nextFriday17 = relative_time(thisFriday17 , "+7d@d+17h")
| eval filterFrom = case( "$xxx_presetTime$"=="This Friday 17:00 UTC - Next Week Friday 17:00 UTC", thisFriday17 , "$xxx_presetTime$"=="custom" , $time_from$ )
| eval filterTo = case( "$xxx_presetTime$"=="This Friday 17:00 UTC - Next Week Friday 17:00 UTC", nextFriday17, "$xxx_presetTime$"=="custom" , $time_to$ )
| eval filterFrom_label = strftime(filterFrom,"%d-%m-%Y- %H:%M:%S")
| eval filterTo_label = strftime(filterTo,"%d-%m-%Y- %H:%M:%S")
| table filterFrom , filterTo, filterFrom_label, filterTo_label</query>
  <earliest></earliest>
  <latest></latest>
  <done>
    <set token="from_drill">$result.filterFrom$</set>
    <set token="to_drill">$result.filterTo$</set>
    <set token="filterFrom_label">$result.filterFrom_label$</set>
    <set token="filterTo_label">$result.filterTo_label$</set>
  </done>
</search>

The main issues are:

No data is displayed, even when it should be.
Changing the span ruins the results, meaning that Splunk brings data from the LAST Friday until THIS Friday, not from THIS Friday to the upcoming one or two weeks out.
Working in the Advanced Time Span filter and selecting the above throws "The earliest time is invalid".

TEST 2: A working version I came up with is the following, but the results captured are from the LAST Friday until THIS Friday, not from THIS Friday to the upcoming two weeks. Reducing the time span from the below breaks the code. This broke me too.

| eval thisFriday17 = if( strftime(now(),"%w")=="5", relative_time(now() , "@w5+17h"), relative_time(now() , "+1w@w5+17h"))
| eval next2Friday17 = if( strftime(now(),"%w")=="5", relative_time(now() , "@w5+14d+17h"), relative_time(now() , "+1w@w5+14d+17h"))
| eval filterFrom = case( "$xxx_presetTime$"=="This Friday 17:00 UTC - Next 2 Weeks Friday 17:00 UTC", thisFriday17 , "$xxx_presetTime$"=="custom" , $time_from$ )
| eval filterTo = case( "$xxxx_presetTime$"=="This Friday 17:00 UTC - Next 2 Weeks Friday 17:00 UTC", next2Friday17, "$xxx_presetTime$"=="custom" , $time_to$ )
| eval filterFrom_label = strftime(filterFrom,"%d-%m-%Y- %H:%M:%S")
| eval filterTo_label = strftime(filterTo,"%d-%m-%Y- %H:%M:%S")
| table filterFrom , filterTo, filterFrom_label, filterTo_label

I must mention that the user is not able to change the Preset Time Span I am forcing:

<input type="dropdown" token="xxx_presetTime" searchWhenChanged="true">
  <label>Preset Time Span</label>
  <choice value="This Friday 17:00 UTC - Next Week Friday 17:00 UTC">This Friday 17:00 UTC - Next Week Friday 17:00 UTC</choice>

I hope I am being clear in explaining my issue. Thanks in advance for your help!

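One note that may explain the off-by-one-week behaviour: relative_time snapping always rounds backward, so "@w5+17h" lands on the most recent Friday 17:00, not the upcoming one. A minimal sketch that computes the upcoming Friday 17:00 regardless of the current weekday and then offsets from it (relative_time works in the search's local timezone, so the UTC requirement needs the server or user timezone to cooperate):

| eval last_friday_17 = relative_time(now(), "@w5+17h")
| eval upcoming_friday_17 = if(last_friday_17 > now(), last_friday_17, relative_time(now(), "+7d@w5+17h"))
| eval friday_plus_1w = relative_time(upcoming_friday_17, "+7d")
| eval friday_plus_2w = relative_time(upcoming_friday_17, "+14d")
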
We have events with the field "ip_client", and a lookup file (F5_IPS_Exclusion.csv) with the field "F5_Exclusion_IPS", as shown below:

| inputlookup F5_IPS_Exclusion.csv

F5_Exclusion_IPS
192.203.194.133
192.203.194.137
202.128.98.209
202.128.98.210

Note: the lookup file contains duplicate values too. I require a search query which will return events whose "ip_client" field value doesn't match any "F5_Exclusion_IPS" field value in the lookup file.

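A common pattern is NOT with a subsearch that renames the lookup column to the event field; the subsearch expands into an OR of ip_client=<value> terms, so the duplicates in the file are harmless (the dedup just keeps the expansion tidy). A sketch, with the base search left as a placeholder:

index=<your_index> sourcetype=<your_sourcetype> NOT [ | inputlookup F5_IPS_Exclusion.csv | dedup F5_Exclusion_IPS | rename F5_Exclusion_IPS as ip_client | fields ip_client ]
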
Hello,

I want to calculate the difference in days as below, where future days should be positive and past days should be negative. I tried eval diff=(now()-_time) and did strftime(diff,"%D"), but there all the days come out positive; I want the past days to be negative.

Date         Difference in Days
04-02-2022   -5
05-02-2022   -4
06-02-2022   -3
07-02-2022   -2
08-02-2022   -1
09-02-2022    0   (today's date)
10-02-2022    1
11-02-2022    2
12-02-2022    3
13-02-2022    4

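Two details to note: now()-_time has the sign backwards for this goal (future events come out negative), and strftime formats an epoch timestamp, not a duration, so it cannot render a day count. A minimal sketch that snaps both timestamps to midnight and subtracts in the other direction, so future dates are positive and past dates negative:

| eval diff_days = round((relative_time(_time, "@d") - relative_time(now(), "@d")) / 86400)
| eval Date = strftime(_time, "%d-%m-%Y")
| table Date diff_days
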
Point 1: I need to use only the logs from specific timings to produce the output (timings like 7am to 8pm, weekdays only, and only for the dates 1st Jan to 17th Jan, plus 31st Jan).

Point 2: We are receiving a log from the host (host=abc), and we have one interesting field named Ip_Address. This field contains multiple IPs, and an event is indexed every 5 minutes of the form "Ping success for Ip_Address=10.10.101.10" or "Ping failed for Ip_Address=10.10.101.10".

FYI: if I get events like 1:00pm ping failed and 1:05pm ping success, we do not count that towards the failed percentage. Basically, only if the failure happens more than once in a row (continuously, like 1:00pm ping failed and 1:05pm ping failed) is it considered a failure.

I do not want data for all IP addresses; only certain IP addresses, listed in our CSV file, are required, at the timings above. We need failed and success percentages within that time for the IPs in our CSV file. Final output like:

IP_Address   Failed%   Success%
1.1.1.1      0.5       99.5

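A sketch of one way to put the pieces together: restrict to the listed IPs with an inputlookup subsearch, filter to weekday business hours, then use streamstats to look at the previous ping result per IP so that only back-to-back failures count. The CSV name and its column name are assumptions, and the specific January date ranges could be layered on with the time picker or an extra filter on date fields:

index=<your_index> host=abc [ | inputlookup monitored_ips.csv | rename IP_Address as Ip_Address | fields Ip_Address ]
| eval wday = strftime(_time, "%a"), hour = tonumber(strftime(_time, "%H"))
| where wday != "Sat" AND wday != "Sun" AND hour >= 7 AND hour < 20
| eval status = if(searchmatch("Ping failed"), "failed", "success")
| sort 0 Ip_Address _time
| streamstats current=f last(status) as prev_status by Ip_Address
| eval real_failure = if(status == "failed" AND prev_status == "failed", 1, 0)
| stats sum(real_failure) as failed_count, count as total by Ip_Address
| eval failed_pct = round(100 * failed_count / total, 1)
| eval success_pct = round(100 - failed_pct, 1)
| rename failed_pct as "Failed%", success_pct as "Success%"
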
2022-02-03 12:07:12 [machine-run-00000-hit-000000-step-00000] [[Card Onboarding] CCC Capture - Logging Framework] [Card Onboarding business process v3.0.0_logging (CardOnboardingCPSCapture)] [CC00] CardOnboardingCPSCaptureRobot [ERROR] Error CPS NOT AVAILABLE on CPS screen UNKNOWN

I need to extract the highlighted fields above, please:

2022-02-03 12:07:12 - Date
[Card Onboarding] CCC Capture - Logging Framework - Process
Card Onboarding business process v3.0.0_logging (CardOnboardingCPSCapture) - Step
CC00 - User
ERROR - Log_Level

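Assuming the bracketed layout is consistent across events, a rex along these lines may work; each capture group maps to one requested field, and it would need adjusting if any segment can itself contain a closing bracket:

| rex field=_raw "^(?<Date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[[^\]]+\] \[(?<Process>\[[^\]]+\][^\]]*)\] \[(?<Step>[^\]]+)\] \[(?<User>[^\]]+)\] \S+ \[(?<Log_Level>[^\]]+)\]"
| table Date Process Step User Log_Level
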
Hi,

I am trying to explore more ways to check whether business email compromise is happening in our organization, before the end user recognises it. I have a list of domains that we usually communicate with; there are around 490 domains, which I have listed and added to a CSV file. There is an index, updated in real time, which has logs from Mimecast. I would like to list domains which are trying to establish email communication with our organization and which are not in the CSV file. So if a non-matching domain is emailing us, it should be displayed in a dashboard. Is this possible?

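One possible shape: extract the sender's domain, then exclude anything found in the lookup. A sketch with assumed names; the index, the sender field, the CSV file name, and its column are all placeholders to adapt to your Mimecast field extractions:

index=<mimecast_index>
| eval sender_domain = lower(replace(<sender_field>, "^[^@]+@", ""))
| search NOT [ | inputlookup known_domains.csv | eval sender_domain = lower(domain) | fields sender_domain ]
| stats count, latest(_time) as last_seen by sender_domain
| convert ctime(last_seen)
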