All Topics



Hi, I have a situation where I have a large dataset with a field named A. This field is large and exceeds Splunk's default character limit of 10,250 characters. About 70% of the dataset is below the default limit, roughly 20% of field A's values are around 30k characters, and the remaining 10% are very large: some values run over 3 million characters, with events about 4 MB apiece!

Is there a way I can create logic in eval or rex to slice the field as below?

Field A:
Event 1 (post-processing 1): create an event from characters 1-10250
Event 2 (post-processing 2): create an event from characters 10251 onward, and so on until it completes.

There is a lot of rule matching that needs to happen in this work, based on requirements. The rule matching looks for specific strings in the field's text and outputs specific values. It works great so far, except that when a field's value exceeds the character limit, Splunk ignores the rest and I cannot match after that point. On top of this, Splunk's auto extraction does not extract any fields after it stops at the character limit for those larger values. The dataset is ingested from MS SQL Server via DB Connect on the Splunk side. I also thought about using Python to pre-process the data, but that adds complexity to the whole picture, and I'm trying to keep it simple. Thanks in advance!
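Since the poster already floated Python pre-processing, here is a minimal sketch of the slicing logic described above: split one oversized value of field A into consecutive chunks of at most 10,250 characters, each of which could then be emitted as its own event. The chunk size and field sizes come from the post; the function name and everything else is illustrative.

```python
# Slice an oversized field value into consecutive chunks so that each
# chunk stays within Splunk's default 10,250-character field limit.
CHUNK_SIZE = 10250

def slice_field(value, chunk_size=CHUNK_SIZE):
    """Return the value split into chunks of at most chunk_size characters."""
    return [value[i:i + chunk_size] for i in range(0, len(value), chunk_size)]

big_value = "x" * 30000            # roughly the ~30k case from the post
chunks = slice_field(big_value)
print(len(chunks))                 # 3 chunks: 10250 + 10250 + 9500
print(sum(len(c) for c in chunks)) # 30000: original length is preserved
```

Note that the extraction limit described in the post sounds like it may be the `maxchars` setting in the `[kv]` stanza of limits.conf (default 10240), which caps how far auto field extraction reads into an event; raising it is worth checking against the limits.conf docs before pre-processing anything.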
I can't seem to get my Watchguard Firebox integrated with our Splunk Enterprise.  I have followed the integration guide to a T.  Has anyone had to use any tricks to get their data into Splunk from their Firebox? 
Hello team, I have one event in which a single field has multiple values, like the sample provided below:

{"body":{"records": [{ "category": "AzureFirewallNetworkRule", "time": "2021-04-26T13:13:37.0631470Z", "resourceId": "/SUBSCRIPTIONS/**********-*****-****-**/RESOURCEGROUPS/C-ABS-IT-SS-PROD-UKS-RG/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/C-ABS-IT-SS-PROD-UKS-FIREWALL", "operationName": "AzureFirewallNetworkRuleLog", "properties": {"msg":"TCP request from 10.119.252.16:64967 to 54.83.8.19:54443. Action: Deny"}},{ "category": "AzureFirewallNetworkRule", "time": "2021-04-26T13:13:37.4217670Z", "resourceId": "/SUBSCRIPTIONS/**********-*****-****-**/RESOURCEGROUPS/C-ABS-IT-SS-PROD-UKS-RG/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/C-ABS-IT-SS-PROD-UKS-FIREWALL", "operationName": "AzureFirewallNetworkRuleLog", "properties": {"msg":"TCP request from 10.119.34.12:62142 to 131.100.0.201:5938. Action: Deny"}},{ "category": "AzureFirewallNetworkRule", "time": "2021-04-26T13:13:37.9262290Z", "resourceId": "/SUBSCRIPTIONS/**********-*****-****-**/RESOURCEGROUPS/C-ABS-IT-SS-PROD-UKS-RG/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/C-ABS-IT-SS-PROD-UKS-FIREWALL", "operationName": "AzureFirewallNetworkRuleLog", "properties": {"msg":"TCP request from 10.119.252.196:13973 to 40.79.154.87:443. Action: Allow"}}

The above is one single event, from which I want to extract the src ip and dest ip; for example, 10.119.252.16 is a src ip and 54.83.8.19 the corresponding dest ip. I want to extract all of them at the backend; I don't want to use rex max_match=0 at search time. Please let me know how I can extract all of them at the backend.

Thanks, Kannu
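Whether the extraction happens at search time or at the backend, the regular expression over the msg text is the same. A minimal sketch of that pattern in Python, run against the msg strings from the event above (the field names src_ip and dest_ip are illustrative):

```python
import re

# Same pattern a Splunk extraction could use against the "msg" text:
# capture source and destination IPs from lines like
# "TCP request from 10.119.252.16:64967 to 54.83.8.19:54443. Action: Deny"
PATTERN = re.compile(
    r"from (?P<src_ip>\d{1,3}(?:\.\d{1,3}){3}):\d+ "
    r"to (?P<dest_ip>\d{1,3}(?:\.\d{1,3}){3}):\d+"
)

def extract_ips(msg):
    """Return (src_ip, dest_ip) from one firewall msg string, or None."""
    m = PATTERN.search(msg)
    return (m.group("src_ip"), m.group("dest_ip")) if m else None

msgs = [
    "TCP request from 10.119.252.16:64967 to 54.83.8.19:54443. Action: Deny",
    "TCP request from 10.119.34.12:62142 to 131.100.0.201:5938. Action: Deny",
]
for msg in msgs:
    print(extract_ips(msg))
```

On the Splunk side, a search-time transform with a similar REGEX and MV_ADD-style settings could produce multivalued src_ip/dest_ip fields without an inline rex; treat the exact stanza as something to verify against the transforms.conf documentation, since the records array may also need event breaking so each record becomes one event.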
Hello, new to Splunk and would appreciate some guidance.  I want to create a timechart query to use for a dashboard to display the average response time over 24h as a trend. This is what I have so far:    index= ... | stats min(_time) as min_t max(_time) as max_t by uniqueId | eval duration = (max_t - min_t)* 1000 | timechart span=24h avg(duration) as AvgDur   This returns an empty table as a response with no average duration values. If I return just before the timechart command, I get a table with all my log responses with the correct duration.  Could I get some guidance on why timechart isn't returning the average durations over 24h?    
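One likely cause of the empty table: `stats` keeps only the fields it outputs, so after `stats min(_time) as min_t max(_time) as max_t by uniqueId` there is no `_time` field left for `timechart` to bucket on. Restoring a timestamp first, for example `| eval _time=min_t`, usually fixes it. The underlying calculation is just min/max per id, as this sketch shows (the field names follow the query in the post, the sample timestamps are made up):

```python
# Per-uniqueId duration in ms, mirroring:
#   | stats min(_time) as min_t max(_time) as max_t by uniqueId
#   | eval duration = (max_t - min_t) * 1000
events = [
    ("id1", 100.0), ("id1", 100.5),
    ("id2", 200.0), ("id2", 202.0),
]

spans = {}
for uid, ts in events:
    lo, hi = spans.get(uid, (ts, ts))
    spans[uid] = (min(lo, ts), max(hi, ts))

ms = {uid: (hi - lo) * 1000 for uid, (lo, hi) in spans.items()}
print(ms)   # duration per uniqueId in milliseconds
avg = sum(ms.values()) / len(ms)
print(avg)  # what avg(duration) over a single 24h bucket would show
```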
We are monitoring the C: drive free space of our whole infrastructure and would like to create a bar chart with colour coding based on different thresholds. We have tried different things in XML and in the query itself, but nothing seems to be working for us. For now our query looks like this:

| mstats latest(_value) as value WHERE metric_name=win:disk.Free_Megabytes AND index=myindex AND instance="C:" BY host
| eval "Free Space (GB)"=round(value/1024,2)
| sort "Free Space (GB)"
| head 20
| table host "Free Space (GB)"

We would like a RED-AMBER-GREEN colour based on the following limits: RED = 0-3, AMBER = 3-8, GREEN > 8. I have tried rangemap, and creating a new field based on these same limits to try to filter them in the XML, but to no avail. Any help in figuring this out is more than welcome. Thanks!
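In Simple XML, table cells can be coloured with `<format type="color">`, while for charts a common workaround is to split the value into three separate series and assign `charting.seriesColors`; either way, the threshold mapping itself reduces to a simple range check. A sketch of that mapping with the limits from the post (how the boundaries at exactly 3 and 8 should fall is an assumption):

```python
def rag_colour(free_gb):
    """Map free space in GB to a RAG colour: 0-3 red, 3-8 amber, >8 green."""
    if free_gb < 3:
        return "red"
    if free_gb <= 8:
        return "amber"
    return "green"

for gb in (1.5, 3.0, 7.99, 12.0):
    print(gb, rag_colour(gb))
```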
Hello, I've tried to get this app working for a small tenant but have been unsuccessful. I'm getting secret errors in the log files. Can anyone help? I have confirmed that the secret is correct for the application.

2021-04-25 23:26:01,469 ERROR pid=5900 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "F:\Splunk\etc\apps\TA_microsoft_o365_email_add_on_for_splunk\bin\ta_microsoft_o365_email_add_on_for_splunk\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "F:\Splunk\etc\apps\TA_microsoft_o365_email_add_on_for_splunk\bin\o365_email.py", line 132, in collect_events
    input_module.collect_events(self, ew)
  File "F:\Splunk\etc\apps\TA_microsoft_o365_email_add_on_for_splunk\bin\input_module_o365_email.py", line 185, in collect_events
    access_token = _get_access_token(helper)
  File "F:\Splunk\etc\apps\TA_microsoft_o365_email_add_on_for_splunk\bin\input_module_o365_email.py", line 64, in _get_access_token
    CURRENT_TOKEN = access_token[ACCESS_TOKEN]
KeyError: 'access_token'
Hi, I want to return a result as a field containing a list of rows.

My data is:
Product: A, SubProduct: A1, Status: 1
Product: A, SubProduct: A2, Status: 1
Product: A, SubProduct: A3, Status: 2
Product: B, SubProduct: B1, Status: 1

I want a query whose result will be:
Product: A, SubProducts: [A1,A2,A3], Status: 2
Product: B, SubProducts: [B1], Status: 1
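In SPL this shape is typically something like `stats values(SubProduct) as SubProducts max(Status) as Status by Product` (`values()` dedupes and sorts, `list()` preserves order; taking the maximum Status per product is an assumption, since the post does not say how the per-product Status is derived). The grouping logic, sketched in Python with the sample rows:

```python
from collections import defaultdict

# Sample rows from the post: (Product, SubProduct, Status)
rows = [
    ("A", "A1", 1), ("A", "A2", 1), ("A", "A3", 2),
    ("B", "B1", 1),
]

groups = defaultdict(lambda: {"subproducts": [], "status": 0})
for product, sub, status in rows:
    g = groups[product]
    g["subproducts"].append(sub)
    g["status"] = max(g["status"], status)  # assumed aggregation rule

for product, g in sorted(groups.items()):
    print(product, g["subproducts"], g["status"])
```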
Hello all,  I do apologise as I am a new Splunker and needing some help with event breaking. Not sure the best approach as my raw data is unreadable.  What is the best method for parsing the log with field extractions + line/event breaking.  Here is an example of a log:     { "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "category": "kube-audit", "ccpNamespace": "5fc0e650f40a0500013bfedc", "resourceId": "/SUBSCRIPTIONS/dsadsa/RESOURCEGROUPS/P365-AUE-MGMT-DTA-FRONTEND-AKS-RG/PROVIDERS/MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/P365-AUE-MGMT-DTA-FRONTEND-AKS", "properties": {"log":"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"f07ee314-89e5-4743-a515-05f18dfd1c32\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/nginx-activate-account-test/configmaps/ingress-controller-leader-nginx-activate-account-test\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:nginx-activate-account-test:nginx-activate-account-test-nginx-ingress\",\"uid\":\"7490dbfe-63ea-4c65-b79c-dc9975e1996a\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:nginx-activate-account-test\",\"system:authenticated\"]},\"sourceIPs\":[\"10.241.0.23\"],\"userAgent\":\"nginx-ingress-controller/v0.34.1 (linux/amd64) ingress-nginx/v20200715-ingress-nginx-2.11.0-8-gda5fa45e2\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"nginx-activate-account-test\",\"name\":\"ingress-controller-leader-nginx-activate-account-test\",\"uid\":\"072c4bc7-a841-458e-af05-9b98e0d80724\",\"apiVersion\":\"v1\",\"resourceVersion\":\"77895625\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-04-25T22:07:43.638765Z\",\"stageTimestamp\":\"2021-04-25T22:07:43.641341Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by RoleBinding 
\\\"nginx-activate-account-test-nginx-ingress/nginx-activate-account-test\\\" of Role \\\"nginx-activate-account-test-nginx-ingress\\\" to ServiceAccount \\\"nginx-activate-account-test-nginx-ingress/nginx-activate-account-test\\\"\"}}\n","stream":"stdout","pod":"kube-apiserver-64bc7458dc-nhccb"}, "time": "2021-04-25T22:07:43.0000000Z", "Cloud": "AzureCloud", "Environment": "prod", "UnderlayClass": "hcp-underlay", "UnderlayName": "hcp-underlay-australiaeast-cx-36"}      
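Part of why the raw data looks unreadable is that `properties.log` holds a second JSON document serialized as an escaped string inside the outer JSON, so the kube-audit record sits two parses deep. This sketch shows that shape with a trimmed stand-in for the event above; it illustrates the structure, not the eventual props.conf answer:

```python
import json

# Outer event: JSON whose properties.log field is itself a JSON string,
# mimicking the kube-audit event in the post.
outer = json.dumps({
    "category": "kube-audit",
    "properties": {
        "log": json.dumps({
            "kind": "Event",
            "verb": "update",
            "sourceIPs": ["10.241.0.23"],
        }),
        "stream": "stdout",
    },
})

event = json.loads(outer)                        # first parse: the envelope
audit = json.loads(event["properties"]["log"])   # second parse: the audit record
print(audit["verb"], audit["sourceIPs"][0])
```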
I've seen a few older posts on this, so I thought I might try to get a more recent answer.

There are situations in which you might want to run the Universal Forwarder as root. Sensibly, new versions of the UF remove the default password issue, but the existence of the Splunk management port running as root exposes the UF to the possibility of remote and local privilege escalation, whether through poor password management by an admin or some undiscovered authentication flaw in the UF itself. Disabling the management port altogether should remove that vector entirely. There is a documented method in server.conf. To save you looking it up, here are the docs from 8.1.3:

disableDefaultPort = <boolean>
* If set to "true", turns off listening on the splunkd management port, which is 8089 by default.
* NOTE: Changing this setting is not recommended.
* This is the general communication path to splunkd. If it is disabled, there is no way to communicate with a running splunk instance.
* This means many command line splunk invocations cannot function, Splunk Web cannot function, the REST interface cannot function, etc.
* If you choose to disable the port anyway, understand that you are selecting reduced Splunk functionality.
* Default: false

In my testing, enabling it on a UF seems OK. Some CLI commands fail, notably anything that needs to authenticate. My question is: other than most CLI commands, will anything important to the UF break?
We have received an alert for the Splunk forwarder not being active on one host, but we are not able to see the contributing events for this. Could you please help me with a query to check whether the Splunk forwarder is active or not?
Hi, I am new to Splunk. I have a query to return the count of successes and failures. I have a field http_status that can either be 200 (success) or anything other than 200 (failure).

| stats count by http_status

currently returns:

200  111
400  214
401  1

I want anything other than 200 to be grouped together as failure:

success  111
failure  215

Is there any way to do this?
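The usual SPL approach is to bucket the status before counting, e.g. `| eval outcome=if(http_status==200, "success", "failure") | stats count by outcome`. The same bucketing logic in Python terms, using the counts from the post:

```python
from collections import Counter

# Per-status counts as returned by | stats count by http_status
status_counts = {"200": 111, "400": 214, "401": 1}

outcome = Counter()
for status, count in status_counts.items():
    key = "success" if status == "200" else "failure"
    outcome[key] += count

print(dict(outcome))  # all non-200 statuses collapse into "failure"
```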
Not able to find the stats details for all MIDs. Tried fillnull; it is not working.

index=UA sourcetype=apps appname="xyz*"
| fields EID
| dedup EID
| lookup employee.csv EID as EID
| search MID in (M1,M2,M3,M4,M5,M6)
| stats count(EID) as total by MID

Getting the result below, but missing results for M4, M5, M6:

MID  total
M1   2
M2   3
M3   4

Expecting the results below:

MID  total
M1   2
M2   3
M3   4
M4   0
M5   0
M6   0
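fillnull cannot help here because the missing MIDs never produce any rows at all; there is nothing to fill. A common SPL fix is to append the full expected MID list with zero counts (e.g. via `| append [...]` with `eval total=0`) and then take `max(total) by MID`. The zero-filling logic itself is just this:

```python
# Zero-fill: ensure every expected MID appears in the result, with 0
# when the search returned no rows for it.
EXPECTED = ["M1", "M2", "M3", "M4", "M5", "M6"]

def zero_fill(counts, expected=EXPECTED):
    """Return a complete MID -> total mapping, defaulting missing MIDs to 0."""
    return {mid: counts.get(mid, 0) for mid in expected}

observed = {"M1": 2, "M2": 3, "M3": 4}  # what the stats actually returned
print(zero_fill(observed))
```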
Hello everyone, when trying to work with Analytics in my AppDynamics on-premises environment, I get an error. The IP address of the server where my Controller and Events Service are hosted has changed, which is why the above address http://ec2-3-120-31-140.eu-central-1.compute.amazonaws.com is no longer valid. The problem is that the Events Service still references the old Controller address. When I open my platform-admin-server.log file, I see an exception. How can I change the Controller IP address referenced by my Events Service? Thanks in advance, Christian
@dilip7504 @renjith_nair  I am unable to solve the below problem on "tutorialsdata.zip" provided in documentation as there is no field named as "purchase".  Client purchase details: Total purchase split by product ID Total Products split by product ID...
Dear All, all of the internal indexes of Splunk (_audit, _internal, _introspection, _metrics, _telemetry, _thefishbucket and splunklogger) were disabled, shown with red lock icons. I have tried:
1) restarting splunkd;
2) following the method in the link below (deleting the entire _audit folder, but no luck):
https://community.splunk.com/t5/Archive/audit-index-remains-disabled/m-p/98864
Please help me. Thank you.
Hi Team Splunk,  Can someone tell me how I can represent a blank space in a title field?  (ie... I want to centre my title over my result which is centred to the panel, however my title is left justified. When I use the below syntax the leading space characters are ignored. <title>                        Number of Events</title>
I'm trying to create a simple table from the following JSON data, and I only care about extracting three particular values: trap_recieved_ts, cctConfigChangeType, and cctDeviceLabel.

{
  "trap_destination_ip": "1.2.3.4",
  "trap_recieved_epoch": "1234567890",
  "trap_recieved_ts": "2021-04-08 14:17:32",
  "trap_source_ip": "1.2.3.4",
  "traps": [
    {
      "DISMAN-EVENT-MIB::sysUpTimeInstance": "2:2:49:18.49",
      "ETV-Agent-MIB::cctConfigChangeTrapSequenceNumber.17": "Wrong Type (should be Counter32): 17",
      "ETV-Agent-MIB::cctConfigChangeType.17": "Switchover",
      "ETV-Agent-MIB::cctDeviceLabel.17": "HOSTNAME",
      "SNMP-COMMUNITY-MIB::snmpTrapAddress.0": "1.2.3.4",
      "SNMP-COMMUNITY-MIB::snmpTrapCommunity.0": "public",
      "SNMPv2-MIB::snmpTrapEnterprise.0": "ETV-Agent-MIB::cctConfigChangeTrapTable",
      "SNMPv2-MIB::snmpTrapOID.0": "ETV-Agent-MIB::cctSingleConfigChangeTrap"
    }
  ]
}

The first issue I'm running into is with the .17, which increments with every new data point. The dot forces Splunk to treat the 17 as a new object in the path, and the fact that it increments prevents me from statically defining the key in my search string.

index=index
| spath output=time path=trap_recieved_ts
| spath output=alert path=traps.ETV-Agent-MIB::cctConfigChangeType.17
| spath output=device path=traps.ETV-Agent-MIB::cctDeviceLabel.17
| table time alert device

I've read that I should be able to do the following in order to identify the two problematic keys I'm interested in, but Splunk seems to just disregard the {}:

index=index
| spath output=time path=trap_recieved_ts
| spath output=alert path=traps{2}
| spath output=device path=traps{3}
| table time alert device

Any suggestions?
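Because spath treats the `.17` as a path segment and the suffix increments with each data point, an exact path cannot be pinned down; one alternative is to match the keys by prefix instead (in SPL this is often done with a rex over _raw, e.g. capturing `cctConfigChangeType\.\d+": "..."`). The matching logic itself is simple prefix matching over the key names, sketched here against a trimmed version of the event (note that "trap_recieved_ts" keeps the spelling used by the actual data):

```python
import json

event = json.loads("""{
  "trap_recieved_ts": "2021-04-08 14:17:32",
  "traps": [{
    "ETV-Agent-MIB::cctConfigChangeType.17": "Switchover",
    "ETV-Agent-MIB::cctDeviceLabel.17": "HOSTNAME"
  }]
}""")

def find_by_prefix(record, prefix):
    """Return the first value whose key starts with prefix, ignoring the
    numeric suffix (.17, .18, ...) that increments per data point."""
    for key, value in record.items():
        if key.startswith(prefix):
            return value
    return None

trap = event["traps"][0]
row = (
    event["trap_recieved_ts"],
    find_by_prefix(trap, "ETV-Agent-MIB::cctConfigChangeType"),
    find_by_prefix(trap, "ETV-Agent-MIB::cctDeviceLabel"),
)
print(row)  # the three values the table needs: time, alert, device
```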
Hi, my name is hamanako. I would like to use "Windows Event Code Security Analysis", but when I select the "Lookup OverView" or "Table Analysis" menu, I get the following error. Please let me know how to solve this problem.

Error message:
  The app you requested is not available on "splunk_wineventcode_secanalysis".
  The app you requested is not available on this system. Check the spelling of the app, or choose another from the following list:

Environment:
  OS: Windows 2012
  Splunk Enterprise 8.1.2 (Free)
  Windows Event Code Security Analysis Version 1.3

File name: splunk_wineventcode_secanalysis
https://github.com/stressboi/splunk_wineventcode_secanalysis
Where do I find the "archivename" of my KVstore in each server for backing up purposes?
I am getting the following error while configuring Splunk with Azure Event Hub. 2021-04-23 10:12:17,141 level=WARNING pid=xxxxxxx tid=Thread-2 logger=azure.eventhub._eventprocessor.event_processor pos=event_processor.py:_load_balancing:281 | EventProcessor instance '2ea6353e-ee45-4a4e-b173-5f82ae79707c' of eventhub 'insights-activity-logs' consumer group '$Default'. An error occurred while load-balancing and claiming ownership. The exception is EventHubError("Unexpected response '{'error': 'invalid_client', 'error_description': 'AADSTS7000215: Invalid client secret is provided.\r\nTrace ID:xxxxxxx-c913-420f-8dfb-5169faed3800\r\nCorrelation ID: xxxxxxxx-81b2-4436-9d25-13e38ec15d9d\r\nTimestamp: 2021-04-23 02:12:10Z', 'error_codes': [7000215], 'timestamp': '2021-04-23 02:12:10Z', 'trace_id': 'xxxxxxxxx-c913-420f-8dfb-5169faed3800', 'correlation_id': 'xxxxxxxx-81b2-4436-9d25-13e38ec15d9d', 'error_uri': 'https://login.microsoftonline.com/error?code=7000215'}'\nUnexpected response '{'error': 'invalid_client', 'error_description': 'AADSTS7000215: Invalid client secret is provided.\r\nTrace ID: xxxxxxx-c913-420f-8dfb-5169faed3800\r\nCorrelation ID: xxxxxxx-81b2-4436-9d25-13e38ec15d9d\r\nTimestamp: 2021-04-23 02:12:10Z', 'error_codes': [7000215], 'timestamp': '2021-04-23 02:12:10Z', 'trace_id': 'xxxxxxxxx-c913-420f-8dfb-5169faed3800', 'correlation_id': 'xxxxxxxxxx-81b2-4436-9d25-13e38ec15d9d', 'error_uri': 'https://login.microsoftonline.com/error?code=7000215'}'"). 
Retrying after 10.408012031827356 seconds

I am referring to the following tutorials:
https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-azure-monitor-data-part-1-azure-setup.html
https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-azure-monitor-data-part-2-splunk-setup.html

My understanding is that we have to register an Azure AD application and set its permissions for resource management; here, I am using it to enable Splunk to access the activity logs in my Event Hub. I have set up the AD application, added the role assignment to it, and then generated a client secret as mentioned in the tutorial. I am subscribed to Azure for Students; could my limited privileges be the cause of this error?