After creating a new LDAP strategy and entering all required information, I get an error when saving: "Entry not saved, the following error was reported: Syntax Error: Unexpected token < in JSON at position 5". I have verified all entries are correct multiple times.
Hello everyone, I have the query below which fetches data for a particular index, but I also want a few fields from a lookup file, say ABC.csv, whose columns are 'Salary' and 'Date'. I am trying to fetch them but the data comes back blank. Please help:

index=*infra* metric_label="Host : Reporting no data"
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime count(eval(metric_value=1)) as downtime by 6_min_data, source_host
| eval total_uptime = uptime*360
| eval total_dowtime = downtime*360
| eval total_uptime = if(isnull(total_uptime),0,total_uptime)
| eval total_downtime = if(isnull(total_dowtime),0, total_dowtime)
| eval avg_uptime_perc = round((total_uptime/(total_uptime+total_downtime))*100 ,2)
| eval avg_downtim_perc = round((total_downtime/(total_uptime+total_downtime))*100,2)
| eval total_uptime = tostring(total_uptime, "duration")
| eval total_downtime = tostring(total_downtime, "duration")
| rename "total_uptime" as "Total Uptime", "total_downtime" as "Total Downtime", avg_uptime_perc as "Average uptime in %", avg_downtim_perc as "Average Downtime in %", source_host as "Source Host"
| table "Source Host" "Total Uptime" "Total Downtime" "Average uptime in %" "Average Downtime in %"
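A minimal sketch of one way to pull the lookup columns in, assuming ABC.csv is uploaded as a lookup table file and that it shares a key with your events. The key field name host_key below is only an illustration; replace it with whatever column in ABC.csv actually matches your events. Note that the lookup has to run before the stats command, and the looked-up fields have to be carried through stats (for example with latest()), otherwise they are dropped and appear blank:

index=*infra* metric_label="Host : Reporting no data"
| lookup ABC.csv host_key AS source_host OUTPUT Salary Date
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime
        count(eval(metric_value=1)) as downtime
        latest(Salary) as Salary
        latest(Date) as Date
        by 6_min_data, source_host
| table source_host Salary Date uptime downtime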
Hi All, I have two queries which search for users that use an app. The apps are not in the same fields, which is why I had to split the queries. But now I want to join the queries to get the results.

Query 1
index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai"
| table user, url_domain, date_month
| stats count by user url_domain date_month
| chart count by url_domain date_month
| sort url_domain 0

Query 2
index=db_it_network sourcetype=pan* app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| table user, app, date_month
| stats count by user app date_month
| chart count by app date_month
| sort app 0

Example of the results I want:

App                 August  July
claude-base         123     120
google-gemini       124     42
openai              153     123
bing-ai-base        212     232
www.perplexity.com  14      12
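A sketch of one way to avoid joining at all: run both conditions in a single search, normalise url_domain and app into one field, and chart on that. Field names are taken from your queries; letting url_domain win when it equals www.perplexity.ai is an assumption you may want to adjust:

index=db_it_network sourcetype=pan*
    (url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base)
| eval App=if(url_domain="www.perplexity.ai", url_domain, app)
| stats count by user App date_month
| chart count over App by date_month
| sort App 0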
Hello, I have successfully integrated Cloudflare with Splunk Enterprise using the pull method. This integration was set up on a Heavy Forwarder, so the logs are first received by the HF before being forwarded to the Indexers. While the integration itself is working correctly, I encountered an issue with the time zone in the logs. The API we are using requires the timestamps to be in UTC. As a result, when the API fetches the logs, the events are recorded in the UTC timezone. However, I need to convert these timestamps from UTC to UTC+5 (Pakistan Standard Time, PKT).

Here is a sample log event from Cloudflare:

---
EdgeEndTimestamp: 2024-08-26T09:07:43Z
EdgeResponseBytes: 72322
EdgeResponseStatus: 206
EdgeStartTimestamp: 2024-08-26T09:07:43Z
---

We are extracting the EdgeStartTimestamp and using it for the _time field, but this timestamp is in UTC format. In my props.conf file on the Heavy Forwarder, I have the following configuration:

[cloudflare:json]
disabled = false
TIME_PREFIX = \"EdgeStartTimestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

I also tried adding the TZ setting to props.conf:

[cloudflare:json]
TZ = Asia/Karachi

However, this didn't work because the events themselves contain timezone information (UTC), so the TZ setting doesn't have any effect. I then tried using TZ_ALIAS in props.conf:

[cloudflare:json]
TZ_ALIAS = Z=UTC+5

This didn't work either. Finally, I tried the following in props.conf, but it still didn't resolve the issue:

[cloudflare:json]
EVAL-_time = _time + 5*3600

Any help would be appreciated.
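For context, _time is stored internally as a UTC epoch value and Splunk Web renders it in each user's configured time zone (Settings > your account > Time zone), so one low-risk option is to leave ingestion alone and only shift at search or display time. A minimal search-time sketch, where the output field name is just an illustration:

... your base search ...
| eval EdgeStartTimestamp_pkt = strftime(_time + 5*3600, "%Y-%m-%dT%H:%M:%S+05:00")
| table _time EdgeStartTimestamp_pkt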
Hi, I am currently learning Splunk and trying to set it up for myself on my local machine. I am looking at the Splunk BOTS v2 guide and can see there are a number of apps to be added. There is one app which I am unsure how to download and add via the web GUI, as there are no links to download.

App: Collectd App for Splunk Enterprise
https://splunkbase.splunk.com/app/2875/

Upon visiting the site (which points to GitHub), I am presented with some instructions to configure things, which is a little confusing for new starters, and I am also not able to see the app download link. Am I missing something here, or is it just no longer relevant for v2? I am not using any forwarders, indexers etc., just one host to try to set this up.

Thanks.
Here I have 2 event statistics (id=superman & id=batman) in JSON format. How do I arrange them in a table format based on the id and the enemy information values? This is slightly different to previous questions: for the "enemies" key, "enemy_information" holds the value headers while "enemy_information_values" contains a list of values matching those headers. For example, I want the result to look something like the table below. I know the data needs pre-processing, but I wanted to know if it is possible to do this via SPL commands. The reason to avoid pre-processing is that I already have previous data with the same format that is too far back for me to re-ingest.

[{
    "id": "Superman",
    "birthName": "Kal-El",
    "origin": "Krypton",
    "enemies": [
        {
            "enemy_information": ["name", "location", "powers"],
            "enemy_information_values": [
                ["Doomsday", "Kryptonian Prison",
                    ["Super Strength", "Invulnerability", "Regeneration", "Adaptation", "Enhanced Durability", "Immunity to Kryptonite"]],
                ["Lex Luthor", "Metropolis",
                    ["Genius-level Intellect", "Skilled Strategist", "Advanced Technology and Weaponry", "Political Influence", "Expert in Kryptonite"]]
            ]
        }
    ]
},
{
    "id": "Batman",
    "birthName": "Bruce Wayne",
    "origin": "Gotham City",
    "enemies": [
        {
            "enemy_information": ["name", "location", "powers"],
            "enemy_information_values": [
                ["Joker", "Gotham City",
                    ["Genius-level Intellect", "Master of Psychological Manipulation", "Skilled Hand-to-Hand Combatant", "Expert in Criminal Psychology", "Master of Disguise"]],
                ["Two-Face", "Gotham City",
                    ["Expert Marksman", "Skilled Hand-to-Hand Combatant", "Access to Advanced Weaponry", "Strategic Mind", "Psychological Trauma"]]
            ]
        }
    ]
}]
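A rough SPL sketch of one approach, assuming each of the two events is a single JSON object as shown (not the surrounding array) and that the enemies array has a single entry per event; spath paths and the way nested arrays are returned may need adjusting for your actual data:

| spath output=id path=id
| spath output=enemy path=enemies{}.enemy_information_values{}
| mvexpand enemy
| eval name=spath(enemy, "{0}"),
       location=spath(enemy, "{1}"),
       powers=mvjoin(spath(enemy, "{2}{}"), ", ")
| table id name location powers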
Hi, I have a customer using Splunk for just syslog. There has recently been a DDoS attack, and we are looking to report on how much traffic came from the known DDoS hosts. In the syslog the router has flagged the known IPs as:

msg="torproject.org:Anonymizers, SSI:N" note="ACCESS BLOCK"

We can search for this fine. However, there is a preceding entry in the syslog for the sending IP address, where the router has forwarded the traffic from the firewall to its IP address check phase. We are looking to get the total rows of all traffic from DDoS hosts.

So we search for "torproject", then we want to search again for all IPs that appeared in that first search: extract from that search every src="103.76.173.203:7627" and then search for all of those.

Any ideas please? End goal: how much traffic was from DDoS hosts and how much wasn't (as a rough %).

Thanks in advance
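A sketch of one two-step way to do this, assuming the source IP always appears as src="ip:port" in the events; the index name and the lookup file name are placeholders. Step 1 saves the flagged source IPs to a lookup, step 2 tags every event against that lookup and computes the split:

Step 1 - build the list of flagged source IPs:
index=your_syslog_index "torproject" note="ACCESS BLOCK"
| rex "src=\"(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
| dedup src_ip
| table src_ip
| outputlookup ddos_hosts.csv

Step 2 - tag all traffic and work out the rough percentage:
index=your_syslog_index
| rex "src=\"(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
| lookup ddos_hosts.csv src_ip OUTPUT src_ip AS ddos_match
| eval traffic=if(isnotnull(ddos_match), "ddos", "other")
| stats count by traffic
| eventstats sum(count) as total
| eval percent=round(count/total*100, 2)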
Hello, I need to collect logs from a Stormshield firewall. Do you have any suggestions on how to gather these logs, or is there a specific add-on available for this purpose? Thank you in advance.
Hi everyone, I want to ask where I can get the latest update for legit_domains.csv. I'm asking here because when I check it in lookups it says no owner, so I think it was created automatically by Splunk. I know it can be updated manually, but that takes time. It would be helpful if you could point me to the latest update for this .csv.
We have created a new app registration as per the document and assigned the correct permissions as per the document. We are still not able to pull the logs. The Splunk support portal has been down for 5 days, and we need urgent support.

invalid_client","error_description":"AADSTS7000216: 'client_assertion', 'client_secret' or 'request' is required for the 'client_credentials' grant type. Trace ID
Hi, I still get an error regarding GlusterFS during another fresh install of the latest Splunk SOAR, even though I already updated the wording from mirror to vault in install_common.py and already applied the change. Any idea where else I should update those links that produce the error? I already installed the packages manually, but the installer still checks those links. Please help.
When I try to log in to Splunk it gives me authentication options. Once the username and password are provided, it gives me the error below.

Also, when I checked web_service.log I see the following error:

'Error connecting to /services/authentication/users/splunk: timed out',)
Hi Team, hope this message finds you well. I have a new Splunk on-premise instance and we are planning to implement the Splunk TrackMe app on our SHC to monitor data latency, missing data, etc. for our instance. I read through a few docs (https://trackme.readthedocs.io/en/latest/deployment.html) that say it is resource consuming. I want to understand whether it will impact our license consumption, beyond CPU and memory, after deployment. Also, do we need a separate license for Splunk TrackMe, and what are its cons? Please reply soon, thanks in advance.
I have a JSON file with 23,904 objects in it. They all look like:

{
"1.Entry": "1.Data",
...
"44.Entry": "44.Data"
},
... 23,902 similar entries ...
{
"1.Entry": "1.Data",
...
"44.Entry": "44.Data"
}

But forwarding the JSON file led to a count of only 22,256 events (representing 22,256 JSON objects). My props.conf:

[json_test]
DATETIME_CONFIG =
TIMESTAMP_FIELDS = test.sys_created_on
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = test json
disabled = false
pulldown_type = true

The problem is not that a single event is truncated, but that the JSON file is.
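A sketch of one thing to experiment with, on the assumption that the missing events come from breaking on every newline rather than on object boundaries: make LINE_BREAKER break between a closing and an opening brace. This is only a starting point, not a confirmed fix; the exact regex depends on how the file is laid out, and INDEXED_EXTRACTIONS = json has its own structured parsing that can interact with these settings, so test on a sample first:

[json_test]
INDEXED_EXTRACTIONS = json
KV_MODE = none
DATETIME_CONFIG =
TIMESTAMP_FIELDS = test.sys_created_on
NO_BINARY_CHECK = true
# Break where one object ends and the next begins: the comma and newlines in the
# capture group are discarded, "}" stays with the previous event and "{" starts the next.
LINE_BREAKER = \}(\s*,?\s*[\r\n]+)\{
SHOULD_LINEMERGE = false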
Hi, I'm trying to instrument my .NET application for Splunk Observability Cloud. I'm using this package for that and it's working; I can see traces coming in. However, in the Database Query Performance section, I can only see the queries executed by Hangfire (which we use to manage background jobs) in the application. Other DB queries are not captured. We are using a PostgreSQL database hosted in Amazon RDS, which is compatible. The SQL Database MetricSets are also active. How can I make sure all the DB queries are captured?
How do I determine when to use index=botsv1?
Hi folks, I was working with a Splunk webhook, however I'm getting the error below while sending the payload through the webhook, even though the webhook URL has already been allowed.

action=webhook STDERR - Error sending webhook request: <urlopen error [Errno -2] Name or service not known>

Does anyone have any ideas on how to resolve this issue?
I'm very new to Splunk. I have two tokens as input to a dashboard and want to change a query based on which one is entered.

My base query (with no dashboard):

eventtype=builder user_id IN (<value1>, <value2>, etc.) | eval .....

I created a dashboard and want to use tokens for the input:

token1=$id$
token2=$email$

If token1 has data, I want to execute:

eventtype=builder user_id IN ($id$) | eval....

Otherwise, I want to execute:

eventtype=builder user_mail IN ($email$) | eval .....
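A minimal Simple XML sketch of one way to handle this, assuming the field names user_id and user_mail from your queries: give each text input a default that matches nothing (NONE here is just a placeholder value that never appears in your data) and let a single search OR the two conditions together, so whichever token is filled in drives the results:

<form>
  <fieldset submitButton="true">
    <input type="text" token="id">
      <label>User ID(s)</label>
      <default>NONE</default>
    </input>
    <input type="text" token="email">
      <label>Email(s)</label>
      <default>NONE</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>eventtype=builder (user_id IN ($id$) OR user_mail IN ($email$)) | eval ...</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>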
Requirements:
Find and save sensitive data fields from logs
Save the log snippet around the sensitive data field
Remove duplicates per mule app and sensitive data field
Create a table showing mule app name, sensitive data, and log snippet

Is there a way to improve the search query so I don't have to duplicate the rex commands every time I need to add a new sensitive data value? (app_name is an existing custom field)

index="prod"
| rex field=_raw (?i)(?<birthDate>(birthDate))
| rex field=_raw (?i)(?<dob>(dob))
| rex field=_raw (?i)(?<birthday>(birthday))
| rex field=_raw (?i)(?<birthDateLog>(birthDate).*?\w\W)
| rex field=_raw (?i)(?<dobLog>(dob).*?\w\W)
| rex field=_raw (?i)(?<birthdayLog>(birthday).*?\w\W)
| eval SENSITIVE_DATA=mvappend(birthDate,dob,birthday)
| eval SENSITIVE_DATA_LOWER=lower(SENSITIVE_DATA)
| dedup app_name SENSITIVE_DATA_LOWER
| eval SENSITIVE_DATA_LOG=mvappend(birthDateLog,dobLog,birthdayLog)
| stats list(SENSITIVE_DATA_LOG) as SENSITIVE_DATA_LOG list(SENSITIVE_DATA_LOWER) as SENSITIVE_DATA_LOWER by app_name
| table app_name SENSITIVE_DATA_LOWER SENSITIVE_DATA_LOG

Example output:

app_name  SENSITIVE_DATA_LOWER  SENSITIVE_DATA_LOG
s-api     dob, birthdate        dob: 01/01/2024, birthdate: 09-09-1999
p-api     birthday              birthday: August 23, 2024
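A sketch that collapses the per-value rex commands into a single pattern with an alternation, so adding a new sensitive value only means extending the alternation in one place. It assumes nested named capture groups (which Splunk's PCRE-based rex generally accepts); the .{0,40} snippet length is an arbitrary stand-in for your original .*?\w\W capture:

index="prod"
| rex field=_raw max_match=0 "(?i)(?<SENSITIVE_DATA_LOG>(?<SENSITIVE_DATA>birthdate|dob|birthday).{0,40})"
| eval SENSITIVE_DATA_LOWER=lower(SENSITIVE_DATA)
| dedup app_name SENSITIVE_DATA_LOWER
| stats list(SENSITIVE_DATA_LOG) as SENSITIVE_DATA_LOG list(SENSITIVE_DATA_LOWER) as SENSITIVE_DATA_LOWER by app_name
| table app_name SENSITIVE_DATA_LOWER SENSITIVE_DATA_LOG

If the list of values keeps growing, the alternation could also live in a search macro so it is defined once and reused.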
We have JSON logs. From the logs below we need a rex to extract the failures count that is mentioned in the logs, e.g. "7 failures".

{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 3 batches with 3 failures.3", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 4 batches with 4 failures.4", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 5 batches with 5 failures.5", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 7 batches with 7 failures.7", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
{"attributes": {"type" : "rar_Log__c", "url": "/data/log/v4.0/subject/rar"}, "Application_Id__c": "MOT-Branch", "Category__c": "MOT-Branch", "Comment__c": "Class Name: MOT_Date3DayPurgeBatch - LCT Declined or Not Funded applications deletion completed 10 batches with 10 failures.10", "Contact_Id__c": null, "CreatedById" : 657856MHQA, "CreatedDate": "2022-02-21T16:04:01.000+0000", "Description__c": null}
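A sketch of a rex for this, assuming the count always appears as "with N failures" inside the Comment__c field; the index and sourcetype are placeholders, and if Comment__c is not auto-extracted from the JSON, run the rex against _raw instead:

index=your_index sourcetype=your_sourcetype "MOT_Date3DayPurgeBatch"
| rex field=Comment__c "completed\s+(?<batch_count>\d+)\s+batches\s+with\s+(?<failure_count>\d+)\s+failures"
| table _time Application_Id__c batch_count failure_count
| stats sum(failure_count) as total_failures count as events_with_failures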