All Topics

table returns duplicates for extracted fields that are not Selected Fields. In the following image, host is a Selected Field which has a single entry, whereas the other fields are automatically extracted from the events (the output of docker inspect). As seen in the attached image, these extracted fields have two entries. There are two events, and for each event State.Running shows 'true' twice instead of a single 'true'. Why is this happening? The values are duplicated.
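If the duplication comes from the same value being extracted twice per event (a common side effect of automatic key-value extraction on repeated JSON keys), one possible workaround is to deduplicate the multivalue field before tabling it. This is a sketch; the field name `State.Running` is taken from the description above, and fields with dots need single quotes in eval:

```
... | eval State_Running=mvdedup('State.Running')
| table host State_Running
```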
Hi Splunk Community, I am seeking assistance with what should be a relatively simple task: dropping/filtering particular events on a heavy forwarder node using props.conf/transforms.conf. I have successfully implemented event filtering/dropping in this environment many times before, so I am surprised at how difficult this particular prop/transform has been to get working. It is not working as expected, despite the matching regex testing successfully in the Splunk GUI, with the Splunk pcregextest CLI tool, and on regex101.com. The log events are from a Cisco Firepower firewall. They are sent via syslog from the firewall to a Linux syslog server, where they are written to disk, picked up by the Splunk universal forwarder, sent to a heavy forwarder node, and then forwarded to an indexer node. I have tried many permutations of the props.conf/transforms.conf below, including sending events which do NOT contain the specified words to the nullQueue, or sending everything to the nullQueue and then routing events which DO match the regex to the indexQueue, various changes to the regex, etc. The result is that the Splunk indexer node (and the resultant index) receives either every event or no events at all, depending on the test. Any help or tips to assist in debugging this problem will be greatly appreciated. Thanks.

What am I trying to achieve with props.conf/transforms.conf? Drop any log events which do not contain the word "URLSICategory", "DNSSICategory", or "IPReputationSICategory".

inputs.conf (Linux syslog host - universal forwarder)

[monitor:///var/log/firewall/firewall_test.log]
disabled = false
index = ngfw_security_intelligence
sourcetype = security_syslog

transforms.conf (heavy forwarder)

[allsetnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[ngfw_si_events_whitelist]
REGEX = ((?:URLSICategory|DNSSICategory|IPReputationSICategory))
DEST_KEY = queue
FORMAT = indexQueue

props.conf (heavy forwarder)

[ngfw_security_intelligence]
TRANSFORMS-ngfw-drop = allsetnull, ngfw_si_events_whitelist

Other regex tried:
- Match lines which do NOT contain the words (then nullQueue them):
(?s)^((?!URLSICategory|DNSSICategory|IPReputationSICategory).)*$
- Match lines which DO contain the words (then indexQueue them):
((?:URLSICategory|DNSSICategory|IPReputationSICategory))
(?:URLSICategory|DNSSICategory|IPReputationSICategory)
(URLSICategory|DNSSICategory|IPReputationSICategory)
URLSICategory|DNSSICategory|IPReputationSICategory

Testing done (GUI):
index=ngfw_security_intelligence | regex _raw="^((?!URLSICategory|DNSSICategory|IPReputationSICategory).)*$"
index=ngfw_security_intelligence | regex _raw="(?s)^((?!URLSICategory:|DNSSICategory:|IPReputationSICategory:).)*$"

Testing done (CLI):
./splunk cmd pcregextest mregex="^((?!URLSICategory|DNSSICategory|IPReputationSICategory).)*$" test_str="Destination, IPReputationSICategory: Global-Blacklist_1"

Testing done (regex101.com):
https://regex101.com/r/MDQqBx/1
https://regex101.com/r/rLDxHr/1

The log data (anonymized for this post):
Rcvd:2021-04-15T11:17:46.673993+10:00 From:firewall-1.site.com Time:2021-04-15T01:17:45 Host:firewall-1 Pri:alert.info Msg: %FTD-6-430002: EventPriority: High, DeviceUUID: 00000000-0000-0000-0000-000000000001, InstanceID: 16, FirstPacketSecond: 2021-04-15T01:17:45Z, ConnectionID: 35916, AccessControlRuleAction: Block, AccessControlRuleReason: IP Block, SrcIP: 10.0.0.8, DstIP: 1.2.3.9, SrcPort: 16403, DstPort: 16386, Protocol: udp, IngressInterface: Inside, EgressInterface: Outside, IngressZone: INSIDE, EgressZone: OUTSIDE, IngressVRF: Global, EgressVRF: Global, ACPolicy: ACP-XX-20210329, Prefilter Policy: PFP-XX-20210329, InitiatorPackets: 1, ResponderPackets: 0, InitiatorBytes: 58, ResponderBytes: 0,
NAPPolicy: No Rules Active, SecIntMatchingIP: Destination, IPReputationSICategory: Global-Blacklist_1 Rcvd:2021-04-15T11:17:49.924536+10:00 From:firewall-1.site.com Time:2021-04-15T01:17:49 Host:firewall-1 Pri:alert.info Msg: %FTD-6-430003: EventPriority: Low, DeviceUUID: 00000000-0000-0000-0000-000000000001, FirstPacketSecond: 2021-04-15T01:17:49Z, ConnectionID: 0, AccessControlRuleAction: Allow, SrcIP: 10.0.0.1, DstIP: 1.2.3.4, SrcPort: 54102, DstPort: 443, Protocol: tcp, IngressInterface: Inside, EgressInterface: Outside, IngressZone: INSIDE, EgressZone: OUTSIDE, IngressVRF: Global, EgressVRF: Global, ACPolicy: ACP-XX-20210329, AccessControlRuleName: From-IPv4-allowed-users, Prefilter Policy: PFP-XX-20210329, User: Not Found, ConnectionDuration: 0, InitiatorPackets: 12, ResponderPackets: 15, InitiatorBytes: 1419, ResponderBytes: 12575, NAPPolicy: No Rules Active Rcvd:2021-04-15T12:53:40.111154+10:00 From:firewall-1.site.com Time:2021-04-15T02:53:39 Host:firewall-1 Pri:alert.info Msg: %FTD-6-430003: EventPriority: Low, DeviceUUID: 00000000-0000-0000-0000-000000000001, FirstPacketSecond: 2021-04-15T02:53:39Z, ConnectionID: 0, AccessControlRuleAction: Allow, SrcIP: 10.0.0.2, DstIP: 1.2.3.5, SrcPort: 48012, DstPort: 443, Protocol: tcp, IngressInterface: Inside, EgressInterface: Outside, IngressZone: INSIDE, EgressZone: OUTSIDE, IngressVRF: Global, EgressVRF: Global, ACPolicy: ACP-XX-20210329, AccessControlRuleName: From-IPv4-allowed-users, Prefilter Policy: PFP-XX-20210329, User: Not Found, ConnectionDuration: 0, InitiatorPackets: 10, ResponderPackets: 15, InitiatorBytes: 1678, ResponderBytes: 12575, NAPPolicy: No Rules Active Rcvd:2021-04-15T12:53:40.112896+10:00 From:firewall-1.site.com Time:2021-04-15T02:53:40 Host:firewall-1 Pri:alert.info Msg: %FTD-6-430003: EventPriority: Low, DeviceUUID: 00000000-0000-0000-0000-000000000001, FirstPacketSecond: 2021-04-15T02:53:40Z, ConnectionID: 0, AccessControlRuleAction: Allow, SrcIP: 10.0.0.3, DstIP: 1.2.3.6, SrcPort: 
48019, DstPort: 443, Protocol: tcp, IngressInterface: Inside, EgressInterface: Outside, IngressZone: INSIDE, EgressZone: OUTSIDE, IngressVRF: Global, EgressVRF: Global, ACPolicy: ACP-XX-20210329, AccessControlRuleName: From-IPv4-allowed-users, Prefilter Policy: PFP-XX-20210329, User: Not Found, ConnectionDuration: 0, InitiatorPackets: 11, ResponderPackets: 15, InitiatorBytes: 1678, ResponderBytes: 12575, NAPPolicy: No Rules Active Rcvd:2021-04-15T12:53:40.123993+10:00 From:firewall-1.site.com Time:2021-04-15T01:17:45 Host:firewall-1 Pri:alert.info Msg: %FTD-6-430002: EventPriority: High, DeviceUUID: 00000000-0000-0000-0000-000000000002, InstanceID: 16, FirstPacketSecond: 2021-04-15T01:17:45Z, ConnectionID: 35916, AccessControlRuleAction: Block, AccessControlRuleReason: IP Block, SrcIP: 10.0.0.4, DstIP: 1.2.3.7, SrcPort: 16403, DstPort: 16386, Protocol: udp, IngressInterface: Inside, EgressInterface: Outside, IngressZone: INSIDE, EgressZone: OUTSIDE, IngressVRF: Global, EgressVRF: Global, ACPolicy: ACP-XX-20210329, Prefilter Policy: PFP-XX-20210329, InitiatorPackets: 1, ResponderPackets: 0, InitiatorBytes: 58, ResponderBytes: 0, NAPPolicy: No Rules Active, SecIntMatchingIP: Destination, IPReputationSICategory: Global-Blacklist_1 Rcvd:2021-04-15T12:53:40.130404+10:00 From:firewall-2.site.com Time:2021-04-15T12:53:40 Host:firewall-2 Pri:local4.err Msg::Apr 15 02:53:40 UTC: %FTD-session-3-106014: Deny inbound icmp src Inside:10.1.0.1 dst nlp_int_tap:169.254.1.2 (type 3, code 3)
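One thing worth double-checking in the configuration above: props.conf stanzas match on sourcetype (or on `host::`/`source::` patterns), and the inputs.conf shown sets `sourcetype = security_syslog`, while the props.conf stanza uses the index name. A minimal sketch of the commonly documented null-queue routing pattern, keyed to the sourcetype (the stanza and transform-class names are illustrative):

```
# transforms.conf (heavy forwarder)
[allsetnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[ngfw_si_events_whitelist]
REGEX = URLSICategory|DNSSICategory|IPReputationSICategory
DEST_KEY = queue
FORMAT = indexQueue

# props.conf (heavy forwarder)
[security_syslog]
TRANSFORMS-ngfw-drop = allsetnull, ngfw_si_events_whitelist
```

Transforms in a class run in the order listed, so events matching the keyword regex are routed back to the indexQueue after allsetnull has sent everything to the nullQueue.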
We have a situation where we need a general day-to-day admin role and an elevated admin role. Day-to-day will allow most things, but should not allow creation/deletion of local accounts. Setting this up is easy; the only issue is that even though I have "admin all objects", I'm not able to see any other users' details - i.e. I can't see any other users when I try to re-assign objects, and I can't see any other users' details when running REST calls to see how they are set up, etc. Just wondering if anyone else has run into anything like this and managed to figure out a way to work around it?
Hello folks, I have a search in which I use table. Here is a snippet of the results:

ObjectDN _time
cn=Jane Fonda,OU=Blue,OU=Pad,OU=Circle Team,DC=circle,DC=net 2021-04-22 23:48:36
CN=Matt Cruz,OU=Blue,OU=Pad2,OU=Circle Team,DC=circle,DC=net 2021-04-22 01:07:43

If I want to extract the names contained in the CN to output to a column of its own, how would I do that?
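One possible approach, assuming ObjectDN always begins with the CN component as in the snippet (the capture-group name `name` is illustrative): a case-insensitive rex that grabs everything up to the first comma, covering both the `cn=` and `CN=` spellings shown.

```
... | rex field=ObjectDN "(?i)^cn=(?<name>[^,]+)"
| table name ObjectDN _time
```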
Hi Splunkers! I'm running a very simple query to get the subject of all the emails we are getting. Something like this: index=o365_email_data | table Subject. Results look like this:

Subject
Ticket ID: INC3333333 - Prio: 1 - High- "Description of the Incident" - has been updated
Ticket ID: INC1111111 - Prio: 4 - Low- "Description of the Incident" - has been resolved
Ticket ID: INC2222222 - Prio: 3 - Normal - "Description of the Incident" - has been created

What I would like to accomplish is to parse certain parts of that Subject and fill a table like this:

Ticket Priority Description Status
INC3333333 High Description of the Incident updated
INC1111111 Low Description of the Incident resolved
INC2222222 Normal Description of the Incident created

This closely resembles the "Text to Columns" function in Excel. I'm completely lost on how I can achieve this. Many thanks!
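A rex over the Subject field is one way to do this, assuming the subjects follow the pattern shown. Note that the sample data sometimes omits the space before the dash after the priority word ("High-" vs "Normal -"), so the whitespace around the separators is kept flexible:

```
index=o365_email_data
| rex field=Subject "Ticket ID: (?<Ticket>INC\d+) - Prio: \d+ - (?<Priority>\w+)\s*-\s*\"(?<Description>[^\"]+)\"\s*- has been (?<Status>\w+)"
| table Ticket Priority Description Status
```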
Hi, I am facing a strange issue. The HEC setup that sends container logs to Splunk intermittently posts the error below. Logs are being indexed in Splunk too, but I am not sure why this warning is sent each time.

dockerd: time="2021-04-26T15:57:00.437718332-07:00" level=warning msg="Error while sending logs" error="Post https://<ip>:8088/services/collector/event/1.0: EOF" module=logger/splunk
dockerd: time="2021-04-26T15:57:00.477768784-07:00" level=warning msg="Error while sending logs" error="Post https://<ip>:8088/services/collector/event/1.0: EOF" module=logger/splunk
dockerd: time="2021-04-26T15:57:00.490481248-07:00" level=warning msg="Error while sending logs" error="Post https://<ip>:8088/services/collector/event/1.0: EOF" module=logger/splunk
dockerd: time="2021-04-26T15:57:00.521367609-07:00" level=warning msg="Error while sending logs" error="Post https://<ip>:8088/services/collector/event/1.0: EOF" module=logger/splunk
dockerd: time="2021-04-26T15:57:00.829805769-07:00" level=warning msg="Error while sending logs" error="Post https://<ip>:8088/services/collector/event/1.0: EOF" module=logger/splunk

Can you please let me know what the issue is? According to the Docker folks, this will cause network clogs in the Docker swarm and will affect running containers.

Thanks, Shashi
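As a first diagnostic step, one commonly documented check is the HEC health endpoint, which can help separate a network/TLS problem from a token or indexer-queue problem. A sketch, keeping the placeholders `<ip>` and `<token>` for your values:

```
# Check that HEC is up and reachable from the Docker host
curl -k https://<ip>:8088/services/collector/health

# Send a minimal test event with your token
curl -k https://<ip>:8088/services/collector/event \
     -H "Authorization: Splunk <token>" \
     -d '{"event": "hec connectivity test"}'
```

If the health check succeeds but the test event intermittently fails with EOF, the connection is being closed mid-request, which points toward a load balancer, proxy, or indexer-side queue rather than the Docker logging driver itself.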
Hello all, I'm currently working to automate alerts from Splunk to Jira utilizing the Atlassian JIRA Issue Alerts add-on for Splunk. We are running into the error "Unexpected error: No priority in meta." despite setting various priorities like P2 and P3. Any guidance here is greatly appreciated!  
Here is my query:

| search "Some operation:*"
| rex field=message "Some operation: (?<operation>\w+), .* for correlationId: (?<correlationid>.*)"
| reverse
| stats first(_time) as Start, last(_time) as End by correlationid, operation
| eval time=_time
| eval duration=End-Start

I am getting a table of data like this:

correlationid, operation, Start, End, duration
09360e85-c4af-4e1e-896a-be626c1a9cdd DRAFT 1619423349.842 1619423350.010 0.168
0957aa55-7c43-4a74-b50b-2b7f904cdf15 QUEUE_FOR_SUBMISSION 1619427837.169 1619427837.271 0.102

I want to show a bar chart with time (start time) on the x-axis, and a bar for each operation's duration. How can I do that?
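One possible sketch: make Start the event's _time, then timechart the duration split by operation. Here min/max replace the reverse + first/last combination, since they do not depend on event order; the span value is illustrative, so pick one that matches your data density, then render the panel as a column chart.

```
| search "Some operation:*"
| rex field=message "Some operation: (?<operation>\w+), .* for correlationId: (?<correlationid>.*)"
| stats min(_time) as Start max(_time) as End by correlationid operation
| eval duration=End-Start, _time=Start
| timechart span=1m sum(duration) as duration by operation
```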
Hello people, how do I modify the order in which a table shows its rows? I don't want to order by column values or alphabetically; rather, I want the rows shown in a specific arrangement. For example, my table looks like:

DETAIL TOTAL
A 458
B 45
C 12

How can I make my table look like this:

DETAIL TOTAL
B 45
A 458
C 12

These results come from a stats command. I have even tried to change the order in the stats command, but that did not work.
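One common trick is to add a temporary sort key with an explicit ranking, sort on it, then drop it. A sketch, assuming the DETAIL values shown above (the field name `order` is illustrative):

```
... | eval order=case(DETAIL="B", 1, DETAIL="A", 2, DETAIL="C", 3)
| sort order
| fields - order
```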
I'm trying to build a dashboard search that will allow someone to put in an ID and it will do a lookup on the FailureReason code that is part of the Cisco ISE authentication logs that will take into account different platforms like F5, Cisco 9K, Infoblox, etc...  The FailureReason code appears in all CSCOacs_failed_attempt logs but it's located in slightly different parts of the ISE log depending on the platform that the user is trying to login to. I have three different regex expressions, one that works on F5, one for Cisco 9K, and one for Infoblox. Is there a way that I can have the search look through the logs using the three different regex expressions and give me back the result for the one that gives a hit?
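Since rex is non-destructive, one possible pattern is to run all three extractions into differently named capture groups and then coalesce whichever one matched. The three regexes below are placeholders for your existing F5 / Cisco 9K / Infoblox patterns, and the group names are illustrative:

```
... | rex field=_raw "<your F5 regex, capturing (?<fr_f5>...)>"
| rex field=_raw "<your Cisco 9K regex, capturing (?<fr_9k>...)>"
| rex field=_raw "<your Infoblox regex, capturing (?<fr_ib>...)>"
| eval FailureReason=coalesce(fr_f5, fr_9k, fr_ib)
```

coalesce returns the first non-null argument, so whichever platform's regex hit supplies the FailureReason for that event.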
I have a Cisco ASA and my users VPN into it. I have created an alert based on the search below, and it works. In the body of the alert it lists "username = blah" (with the user's actual name). Is it possible to inject that username into the subject of the email alert? index=netfw sourcetype="cisco:asa" eventtype=cisco_vpn_end Cisco_ASA_message_id=113019
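Splunk's email alert action supports `$result.fieldname$` tokens in the subject line, filled from the first row of the alert's results. A sketch, assuming the extracted field is named `username`:

```
# Email subject in the alert action settings
Splunk Alert: VPN session ended for $result.username$
```

Note that only the first result row feeds the token, so if the alert can match multiple users per run, the subject will show just one of them.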
Hello, I need to create an alert that sends me an email when license usage reaches 80% or 90%. In Splunk Cloud there is no built-in alert that can be activated the way there is in Splunk Enterprise on-premises; my license is 10 GB.
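A common approach is to alert on the license usage log in `_internal`. This is a sketch assuming that log is searchable in your Splunk Cloud stack (visibility of `_internal` can vary by Cloud environment) and that the 10 GB quota is hard-coded rather than read from the license pool:

```
index=_internal source=*license_usage.log type=Usage earliest=@d
| stats sum(b) as bytes_used
| eval pct_used = round(bytes_used / (10 * 1024 * 1024 * 1024) * 100, 2)
| where pct_used >= 80
```

Scheduled hourly with a "number of results > 0" trigger, this would email once today's ingestion crosses the 80% mark.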
I have a props.conf file on a heavy forwarder:

[my:csv:report]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_DELIMITER = ,
HEADER_FIELD_QUOTE = "
FIELD_QUOTE = "
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1

Then I added a test file:

"Date","User","Domain","Activity","Result","Method","Host","IP","MAC","Endpoint Host","Endpoint IP","Endpoint MAC","Device Type"
"26 April 2021 00:00:29 EDT (GMT-0400)","username","example.org","Secondary Session","Successful","N/A","Hostname","10.11.22.33","00-11-22-33-44-55","N/A","N/A","N/A","N/A"

I am able to search the data shortly afterwards with the sourcetype. The data is indexed and the header line is not in the search results - but there is no field extraction either! Any hint is very much appreciated. I am running an indexer cluster. Everything is on Splunk Enterprise 7.3.4. Thanks!
After configuring the addon as specified in the document, the error logs are showing "log_error:309 | _Splunk_ Unable to obtain access token".  I have been unable to find what the root cause of this error might be.  The addon has been installed on the IDM.  Can anyone help me out with this issue?
Hello people, thank you so much for the amazing help you provided me with in my last post. I have one final struggle to tackle this month in Splunk, and it concerns how to count events that retain the same ID or reference code but can change values in a specified field over time. For instance: I work for a hotel company, and a booking reference, e.g. "YHDU-984", can have four different status values: BOOKED, PAID, TRAVELED, OK. When a customer makes a reservation the status is BOOKED; when they pay it becomes PAID; when they travel to the destination it becomes TRAVELED; and when they arrive at our hotel it is changed to OK. So a booking reference may have all of these status values or only some of them. Each new change in status is recorded with the DATE_TIME, and every record also shows the DESTINATION (city) and HOTEL_NAME. Rather than counting how many events there are with STATUS="OK" and so on, I want to count the number of BOOKING_REF values in each STATUS, taking into account only the very latest STATUS each BOOKING_REF currently has. I'm sorry, I'm not the best with words, so here is an example. Let's say I can obtain these tables:

BOOKING_REF CLIENT STATUS
HYH89 ADAM BOOKED
HD983 BOB BOOKED
XUUE8 CHARLES BOOKED
XKSIU8 JAMES BOOKED
XPPP4 DINA BOOKED
YHUO1 TINA BOOKED

When I look for STATUS PAID I get this:

BOOKING_REF CLIENT STATUS
HYH89 ADAM PAID
HD983 BOB PAID
XUUE8 CHARLES PAID
XKSIU8 JAMES PAID

When I look for STATUS TRAVELED I get this:

BOOKING_REF CLIENT STATUS
HYH89 ADAM TRAVELED
HD983 BOB TRAVELED

And when I look for STATUS OK I get this:

BOOKING_REF CLIENT STATUS
HD983 BOB OK

If I use the stats command to count each status, I get something like this:

STATUS count
BOOKED 6
PAID 4
TRAVELED 2
OK 1

But for my boss (who is not a friendly person...) that is confusing to interpret. So if I could count by the very last event - in other words, the "current" status - my result should look something like this:

STATUS count
BOOKED 2
PAID 2
TRAVELED 1
OK 1

This is because, out of the first bookings, I can now identify which customers have traveled but not yet arrived at our hotel, and I can also see that only one customer has made it to the hotel. Thank you so much, guys, for your help. This will be the last time I bother you with my posts this month, I promise! I'm sending you a lot of love.

Kindly, Cindy
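A two-stage stats is one way to do this: take the latest STATUS per BOOKING_REF, then count references per status. A sketch - note that latest() is driven by _time, so this assumes the status-change events carry timestamps, as the DATE_TIME field suggests they do:

```
... | stats latest(STATUS) as STATUS by BOOKING_REF
| stats count by STATUS
```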
Greetings - I am trying to set up a WinEventLog inputs.conf whitelist for LAPS (EventCode=4662). These events have a similar structure, in so far as:

Object:
Object Server: DS
Object Type: computer
Object Name: CN=hostname

Which looks something like:

[WinEventLog://Security]
disabled = 0
index = wineventlog
checkpointInterval = 5
blacklist1 = EventCode="4660" Message="Process Name:\s+(?i)(?:[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|optimizer|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe)"
blacklist2 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist3 = EventCode="4663" Message="Process Name:\s+(?i)(?:[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|optimizer|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe)"
blacklist4 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist5 = EventCode="4688" Message="New Process Name: (?i)(?:[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|optimizer|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe)"
whitelist = EventCode="4662" Message\="Object Type:(.*)computer([\s\S]*)Object Name:(.*)CN="

This seems to block all EventCode=4662 events. I have tried something to the effect of:

EventCode=%^4662$%

Per https://community.splunk.com/t5/Getting-Data-In/inputs-conf-Windows-event-whitelist/m-p/502836#M85657
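One interaction that may be at play here: blacklist2 drops any 4662 event whose Object Type is not groupPolicyContainer, which includes exactly the computer-object LAPS events the whitelist is meant to keep, so if the blacklist is winning over the whitelist those events never survive. As an illustrative sketch (not a tested rule), the whitelist could be expressed in the %regex% delimiter form from the linked thread, with `(?s)` so `.` spans the multi-line Message:

```
# Illustrative only: keep 4662 events for computer objects
whitelist = EventCode=%^4662$% Message=%(?s)Object Type:\s+computer.*Object Name:.*CN=%
```

Testing this with blacklist2 temporarily disabled would confirm whether the blacklist/whitelist interaction is the cause.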
Hi, I have a question about search.log. I know I can find log records related to a particular search in search.log using the Job Inspector (clicking the link to search.log at the bottom of the Job Inspector). But my question is: is there any way to get the records related to a particular search run in the past? Example: I ran a search yesterday, and today I would like to get all log records related to that search from its search.log file. Is there any way to do it? Thanks in advance for any info or hints. Best regards, Lukas
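For context, each search keeps its search.log inside its dispatch artifact on the search head, and the file survives only as long as the artifact's TTL (which is short by default, so yesterday's log may already be gone unless the job was saved or its lifetime extended). A sketch, where `<sid>` stands for the search ID:

```
# Dispatch artifacts on the search head; one directory per search ID
ls $SPLUNK_HOME/var/run/splunk/dispatch/
cat $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/search.log
```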
Hi ,   I need to setup SSL for all my UF communicate securely both with my indexer and deployment server.  I have gone through the link: https://conf.splunk.com/session/2015/conf2015_DWaddle_DefensePointSecurity_deploying_SplunkSSLBestPractices.pdf , which is definitely useful. However need to know can I use the same certificate for outputs.conf and server.conf
I have two timecharts:   index=my_index sourcetype=my_sourcetype | where area="area1" | regex message="(?:(^Problem.*)|((?i).*Issue.*)|((?i).*Error.*))" | timechart count by message   and   index=my_index sourcetype=my_sourcetype | where area="area2" | regex message="(?:(^Problem.*)|((?i).*Issue.*)|((?i).*Error.*))" | timechart count by message   The only thing that makes them different is that one is looking at logs where the value of area is area1, and the other is looking at area2. Rather than have two separate timecharts, I would like to have one timechart with a line for area1 and a line for area2, looking at the count of Issues for each over the given period of time. I do not need a span because the dashboard implements that for me with the time range selection feature. How could I go about this? I tried something like "timechart count by message by area"  but that does not work. Thank you.
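Since timechart accepts only a single split-by field, one possible approach is to combine both areas in a single search and split by area instead of message:

```
index=my_index sourcetype=my_sourcetype (area="area1" OR area="area2")
| regex message="(?:(^Problem.*)|((?i).*Issue.*)|((?i).*Error.*))"
| timechart count by area
```

If both the message and the area are needed in the legend, concatenating them into one series field first is a common workaround, e.g. `| eval series=area.": ".message | timechart count by series`.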
I am trying to reduce my logs but would like to see the most logged strings. Is there a way of doing this? I have seen methods on specifying a string but haven't seen ones where one doesn't know the strings.
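The cluster command is one way to surface frequent message shapes without knowing the strings in advance; it groups similar events and counts them. A sketch, where `<your_index>` is a placeholder and the similarity threshold `t` is illustrative:

```
index=<your_index>
| cluster showcount=true t=0.9
| table cluster_count _raw
| sort - cluster_count
```

The top rows then show one representative event per cluster with how many raw events it stands for, which is a useful starting point for deciding what to filter.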