All Topics

Splunk is logging this warning:

WARN AggregatorMiningProcessor [10530 merging] - Breaking event because limit of 256 has been exceeded ... data_sourcetype="my_json"

The "my_json" props.conf stanza on the UF is:

[my_json]
DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = (?:,)([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = _time
TIME_FORMAT = %2Y%m%d%H%M%S
TRUNCATE = 0
category = Structured
description = my json type without truncate
disabled = false
pulldown_type = 1
MAX_EVENTS = 2500
BREAK_ONLY_BEFORE_DATE = true

The data has about 5000 lines; a sample is below:

{ "Versions" : { "sample_version" : "version.json", "name" : "my_json", "revision" : "rev2.0"}, "Domains" : [{ "reset_domain_name" : "RESET_DOMAIN", "domain_number" : 2, "data_fields" : ["Namespaces/data1", "Namespaces/data2"] } ], "log" : ["1 ERROR No such directory and file", "2 ERROR No such directory and file", "3 ERROR No such directory and file", "4 ERROR No such directory and file" ], "address" : [{ "index": 1, "addr": "0xFFFFFF"} ], "fail_reason" : [{ "reason" : "SystemError", "count" : 5}, { "reason" : "RuntimeError", "count" : 0}, { "reason" : "ValueError", "count" : 1} ], ... blahblah ... "comment" : "None"}

How do we fix this warning? We added MAX_EVENTS to props.conf, but it is not working.
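For what it's worth, this warning comes from the line-merging (aggregator) stage, which stops merging at MAX_EVENTS lines (256 by default), so the MAX_EVENTS = 2500 above is apparently not being picked up. Below is a hedged sketch of a stanza, assuming it needs to live where parsing happens (indexers or a heavy forwarder rather than the UF) and that each LINE_BREAKER segment is meant to stand on its own so the aggregator never runs:

[my_json]
# Skip the line-merging stage that emits the "limit of 256 has been exceeded" warning
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:,)([\r\n]+)
TRUNCATE = 0
KV_MODE = json

If the broken lines are instead meant to be merged back into one large event, the same placement point applies: MAX_EVENTS and BREAK_ONLY_BEFORE_DATE only take effect on the parsing tier, not on a universal forwarder.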
Why does neither of the Splunk.com site dashboard examples return data for the following query?

index=main sourcetype=access_combined* status=200 action=purchase | timechart count by productid

Here's what the videos say we should get: [screenshot omitted]. But here's what the query returns: [screenshot omitted]. It groups by date successfully, but doesn't yield results by product. Both of the online dashboard creation videos at the URLs below yield the desired results shown in the first screenshot above.

Note: the source="tutorialdata.zip:*". The two video training sites are here:
https://www.splunk.com/en_us/training/videos/all-videos.html
https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html#education
Is there a way to create a detector to alert if a particular user (based on a part of the URL) is experiencing a higher number of errors? For example, if I have a /user/{customerId}/do-something URL, then I want to be alerted when a particular {customerId} has a high number of errors within a specific time period. If there's a higher number of errors but they're mostly for different {customerId} values, then I don't want a notification. Thanks.
Hi,

Based on the following JSON document, I want to find the value of "Geography" where "City" is the input. Here is the JSON:

{
  "Company" : "Microsoft",
  "Cloud" : "Azure",
  "DataCenters" : [
    { "Geography" : "USA", "Region" : "East", "City": "New York" },
    { "Geography" : "India", "Region" : "West", "City": "Ahmedabad" },
    { "Geography" : "USA", "Region" : "West", "City": "San Fransisco" },
    { "Geography" : "South Africa", "Region" : "West", "City": "Capetown" }
  ]
}

Can somebody please help me fetch this information? Thanks.
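A hedged sketch of one way to do this at search time, assuming the JSON above is the event's _raw and that the city value is supplied literally (the "New York" below is just an example taken from the sample):

| spath path=DataCenters{} output=dc
| mvexpand dc
| spath input=dc
| where City="New York"
| table City Region Geography

The spath/mvexpand pair turns each element of the DataCenters array into its own result row, so the where clause can filter on City and return the matching Geography.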
Hey gang,

I'm using the Splunk Add on for Microsoft Azure to ingest AAD signin logs to Splunk under the azure:aad:signin sourcetype, however there seems to be a gap between the number of events visible in EntraID versus what is visible from Splunk. There are always slightly more events in EntraID. The gap seems to worsen the higher the volume of events becomes. See this table:

Time         Splunk    Entra ID    Difference
1st hour     3265      3305        40
2nd hour     3085      4804        1719
3rd hour     3264      6309        3045
4th hour     2274      3841        1567
5th hour     1659      2632        973
6th hour     2168      3442        1274
7th hour     6236      8923        2687
8th hour     22716     35901       13185
9th hour     63186     101602      38416
10th hour    88607     145503      56896
11th hour    68407     140095      71688
12th hour    76866     124423      47557
13th hour    68717     122355      53638
14th hour    81310     144880      63570
15th hour    50849     140876      90027
16th hour    42972     124040      81068
17th hour    33693     91792       58099
18th hour    13683     50408       36725
19th hour    13973     38695       24722
20th hour    12182     29645       17463
21st hour    9734      24187       14453
22nd hour    8037      16935       8898
23rd hour    5869      11994       6125
24th hour    5631      8837        3206
Total        688383    1385424     697041
Percentage difference              50.31%

- This gap appears even when searching historical logs i.e. time slots over the last two weeks.
- The retention period of the index is 90 days, so the events should not have expired yet.
- There are no line breaking, event breaking, aggregation, timestamp, or other parsing errors for the sourcetype.
- The gap is still present when searching over all time.
- The internal logs from the Splunk Add on for Microsoft Azure only show the following two error messages which don't seem relevant, and only appeared a few times over the last month or so:

"File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/binding.py", line 1337, in request raise HTTPError(response) splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is in maintenance mode."

"File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/event.py", line 111, in write_to stream.flush() BrokenPipeError: [Errno 32] Broken pipe"

I have updated the polling interval of the Microsoft Entra ID Interactive Sign-ins input to 900 seconds, but still the issue persists. What other explanation could there be for the gap?

Thanks,
K
I am currently using the new Dashboard Studio interface; the dashboards make calls to saved reports in Splunk. Is there a way to have a time range work for the dashboard, but also allow it to apply to the reports? The issue we face is that we are able to add the reports to the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?
Hi,

I have an index called Index1 which has a sourcetype called SourceType1, and another index called Index2 with a sourcetype called SourceType2. Some data is in the combination Index1 <-> SourceType1 and some data is in the combination Index2 <-> SourceType2.

How can I write a query that targets the correct index and sourcetype for each?
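A hedged sketch of the usual pattern, pairing each index with its own sourcetype so only the correct combinations are matched (index and sourcetype names taken from the question; the stats at the end is just there to show both sets coming back in one search):

(index=Index1 sourcetype=SourceType1) OR (index=Index2 sourcetype=SourceType2)
| stats count by index, sourcetype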
I have a cyber security finding that states "The splunk service accepts connections encrypted using SSL 2.0 and/or SSL 3.0". Of course, SSL 2.0 and 3.0 are not secure protocols. How do I disable SSL 2.0/3.0? Can I just disable it in the browser, or do I need to change a setting within Splunk?
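A hedged sketch of the relevant settings, assuming the finding is against Splunk Web and/or the splunkd management port. The sslVersions attribute is documented in web.conf and server.conf (recent Splunk versions already default to TLS-only), but verify the stanzas against your version's docs before changing anything:

# web.conf -- Splunk Web (port 8000 by default)
[settings]
sslVersions = tls1.2

# server.conf -- splunkd management port (8089 by default)
[sslConfig]
sslVersions = tls1.2

Disabling the protocols in the browser would only affect your own client; the finding is about what the server side will accept.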
Need some assistance with creating a query where I am trying to capture the parent folder and the first child folder, respectively, from a print output log that has both Windows and Linux folder paths. Sample data is below; the folder paths I am trying to get in a capture group were marked in bold in the original post.

_time, username, computer, printer, source_dir, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\documents\..., Printed
2024-09-24 12:13, cuser, cmp_cuser, print01_offic2, \\cpn-fs.local\data\transfer\..., In queue
2024-09-24 09:26, buser, cmp_buser, print01_offic1, F:\transfers\program\..., Printed
2024-09-24 09:26, buser, cmp_buser, print01_front1, \\cpn-fs.local\transfer\program\..., Printed
2024-09-24 07:19, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., In queue

I am currently using a Splunk query where I call these folders in my initial search, but I want to control this using a rex command so I can add an eval command to see if they were printed locally or from a server folder. The current query is:

index=printLog source_dir IN ("\\\\cpn-fs.local\data\*", "\\\\cpn-fs.local\transfer\*", "c:\\program files\\*", " F:\\transfer\\*") status== "Printed"
| table status, _time, username, computer, printer, source_dir

I tried using the following rex but didn't get any return:

| rex field=source_dir "(?i)<FolderPath>(?i[A-Z][a-z]\:|\\\\{1})[^\\\\]+)\\\\[^\\\\]+\\\\)"

In my second effort, I generated the two regexes below with the Splunk field extractor. I know I need to combine them with an "OR" operator to cover both the Windows and Linux paths, but I get an error when trying to combine them.

Regex generated for Windows (c:\program files):
^[^ \n]* \w+,,,(?P<FolderPath>\w+:\\\w+)

Regex generated for Linux (\\cpn-fs.local\data):
^[^ \n]* \w+,,,(?P<FolderPath>\\\\\w+\-\w+\d+\.\w+\.\w+\\\w+)

To start, I am looking for an output like the one below, where "source_dir" is replaced with the rex-created "FolderPath":

_time, username, computer, printer, FolderPath, file, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\, Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\, Printed

Thanks for any help given.
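A hedged sketch of a single rex that covers both path styles in one capture group (drive-letter roots like c:\program files\ and UNC roots like \\cpn-fs.local\data\). The quadruple/octuple backslashes are there because both the SPL string parser and PCRE each consume one level of escaping, and the eval field name print_source is just an illustrative addition:

| rex field=source_dir "^(?<FolderPath>\\\\\\\\[^\\\\]+\\\\[^\\\\]+\\\\|[A-Za-z]:\\\\[^\\\\]+\\\\)"
| eval print_source=if(match(FolderPath, "^\\\\\\\\"), "server", "local")
| table _time, username, computer, printer, FolderPath, print_source, status

The first alternative grabs \\host\firstfolder\ from UNC paths, the second grabs X:\firstfolder\ from local drive paths; anything that matches the leading double backslash is then flagged as a server print.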
If you’re beyond the first-weeks-of-a-startup stage, chances are your application’s architecture is pretty complex. Especially in CI/CD pipelines, things change on a near daily basis, so what existed in your codebase last week might not this week. In such fast-moving, large-scale environments, it would be impossible for development teams to manually keep track of and hold in their heads every line of code, every change, every new piece of functionality. So why would we expect the same for every piece of infrastructure or observability configuration?

Terraform is a popular Infrastructure as Code (IaC) tool created by Hashicorp that solves for this complexity challenge by getting the manual pieces out of our heads and into automation pipelines and version control. In this post, we’ll start with Terraform basics. We’ll walk through the benefits of using Terraform in general, then move into the “whys” behind incorporating Terraform into observability. Finally, we’ll take a look at how to implement the Splunk Terraform provider to get you up and running with Observability as Code.

Terraform: what is it & why use it?

Rather than going into UIs or manually interacting with CLIs to deploy infrastructure like AWS EC2 instances, Kubernetes clusters, or even entire short-lived test environments, Terraform is a software tool that enables the provisioning and managing of infrastructure via code. This allows development teams to treat their infrastructure (and observability for infrastructure and applications) like any other piece of code they create, making it easy to maintain, update, and deploy additions or changes. Resources are defined in human-readable Terraform (*.tf) files that declaratively describe the desired infrastructure end-state. Plugins, known as providers, interact with cloud platforms and services to define pieces of infrastructure. There are currently over 1,000 providers in the Terraform Registry for managing resources on AWS, Azure, GCP, Kubernetes, Splunk, and many other platforms.

So why might you want to use Terraform? As we said, Terraform takes the tedious, manual, error-prone provisioning out of the infrastructure management process. Instead, with Terraform, teams can build, change, and manage infrastructure in a version-controlled, shared, and repeatable way all from a centralized location – a Terraform file. This file can be committed and stored alongside existing code in GitHub, GitLab, etc. for safe and collaborative infrastructure management. Additionally, multiple cloud platforms can be managed from within the same Terraform file, eliminating the need to manually move between different configuration interfaces. Terraform’s human-readable file structure makes it quick and easy to define and update infrastructure, and the state file that’s created with every deployment keeps track of existing infrastructure and changes so developers don’t have to.

It might sound like the process of using Terraform would be complicated, but it’s pretty straightforward. After installing Terraform, all you need to get started is a main.tf file and a few quick commands to initialize your configuration, view your infrastructure plan, and apply or deploy the defined infrastructure (a minimal example of this cycle appears just below).

Terraform & Observability

How does observability relate to Terraform or Infrastructure as Code? Like infrastructure, observability configurations can be managed as code using Terraform.
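Before we get to observability-specific resources, here is a minimal, hedged sketch of the init → plan → apply cycle mentioned above. The hashicorp/random provider and its random_pet resource are used purely as a stand-in so the example needs no cloud credentials:

terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}

# A trivial resource just to exercise the workflow.
resource "random_pet" "demo" {
  length = 2
}

# From the same directory:
#   terraform init    # download the declared providers
#   terraform plan    # preview what would be created or changed
#   terraform apply   # create/update the resources and record them in state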
Rather than managing resources like dashboards, charts, detectors, or cloud platform integrations via observability platform UIs, they can be configured and deployed with Terraform. What does this bring to a team’s observability practice?

Standardization & Consistency

Terraform managed resources can be packaged into modules making them reusable and easy to deploy. This also means that resources for different services and different environments can share configurations, making observing the end result in observability platforms more consistent. Consistency for resources like dashboards, charts, or alerts makes it easier to interpret data across services, products, or environments. During an incident, knowing where to find key metrics or being able to compare them across environments means faster insight into root cause and decreased time to incident resolution. True, this consistency could be accomplished by manually creating the same dashboards and charts across services and service environments, but this is tedious and leaves room for errors and configuration drift.

Collaboration & Improved Maintenance

Managing observability configurations in code means version-controlled audit logs or documentation of changes to provide context around why resources were created or updated. Unlike with manual configurations where one person can go into the UI and adjust a setting, followed by another person subsequently reverting that setting, or even two people adjusting the same configuration at once, version-controlled configurations must be checked in and approved through the merge request process just like every other code change. Additionally, if updates don’t work out as expected, there’s a simple rollback process to revert changes.

Increased Speed & Scalability

With Terraform, observability configurations are stored in one place – alongside the codebases they monitor. This means no scrambling to find one of hundreds of alerts in a UI to adjust its thresholds – instead, simply go into the codebase to tweak and redeploy. Changes can also be applied across multiple resources and multiple environments quickly, taking away the need to scroll through all resources, make adjustments, and then rinse and repeat for each additional service or environment. It might seem over-kill to create dashboards and charts via Terraform; however, in a world of microservices, where each microservice exists in multiple environments (e.g. dev, staging, production), deploying observability resources via code not only speeds up development, it speeds up issue detection and resolution.

Improved Security

Rather than provisioning observability platform credentials to multiple team members, observability as code keeps things like API keys in one place – with the code – so they can be managed alongside other code secrets for improved security.

Splunk Terraform Provider

Let’s see how easy it is to create observability resources using the Splunk Observability Cloud Terraform provider.

Note: If you don’t already have Terraform installed, start there. Terraform documentation is super thorough and includes a ton of great tutorials.

We’ve created a new main.tf file in our application’s root directory, but if you’re already using Terraform, you can add the same configuration additions to your existing Terraform file.
Here’s an example of a Splunk Terraform provider configuration. (The configuration and terminal screenshots from the original post are not reproduced here; a consolidated sketch of the configuration appears at the end of this post.)

The first section is our required_providers block and declares the provider dependencies that we’ll be using, in our case the Splunk Observability Cloud aka signalfx provider. Think of the required_providers block like an import statement, while the following provider-specific block configures the signalfx provider.

Note: since our Splunk organization is located in us1 and not in the default of us0, we needed to specify the api_url.

We then define resource blocks. Each resource block describes the observability objects we want to create or update – dashboards, detectors, charts, etc. All available configurable resources can be found in the Splunk Observability Cloud provider docs under the Resources section. In our configuration, we’ve defined two resource blocks: a dashboard group, and a dashboard.

Now from this root directory, we can run the terraform init command to initialize the new Terraform infrastructure and install the Splunk Observability Cloud provider.

We can next run terraform plan to see the actions that will be taken once we apply our configuration. Two resources will be created, a dashboard group and a dashboard like we expect, so we can next run terraform apply to create these two resources.

It looks like our resources were successfully created, and we can head over to the Splunk Observability Cloud UI to confirm. Going into our dashboards in Splunk Observability Cloud, we can see our newly created dashboard group and dashboard.

But this isn’t super useful yet because there’s no real information here that can help us on our observability journey. But that’s ok, to add new resources or update existing ones we can simply edit our main.tf file. We’ve added a chart resource to our Terraform file and updated our current dashboard to include the new chart.

We can then run terraform plan to again view the plan for the updated actions. That looks good, so we can apply the changes by running terraform apply and view our newly Terraform-created chart within our Terraform-created dashboard and dashboard group.

Wrap Up

Whether you’re building out detectors and alerts, dashboards and charts, or integrating cloud services via Terraform, managing observability configurations as code can improve resource consistency, collaboration, and maintenance while increasing development and incident resolution speed. If you’re new to Terraform, check out some of the additional resources below. If you’re already using Terraform, try out observability as code and add some Splunk Observability Cloud resources to your Terraform configuration. Need Splunk Observability Cloud? We have a free 14-day trial!

Resources

Introduction to The Splunk Terraform Provider
Infrastructure and Observability as Code
Using the Splunk Observability Cloud Terraform Provider
Managing Splunk Observability Cloud Teams As Code
Terraform Documentation
Splunk Observability Cloud provider
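For reference, here is a hedged, consolidated sketch of roughly what the configuration described above might look like. It follows the splunk-terraform/signalfx provider's documented resource names (signalfx_dashboard_group, signalfx_dashboard, signalfx_time_chart), but the exact arguments, the token variable, and the SignalFlow program are illustrative assumptions; check the provider docs in the Terraform Registry before using it.

terraform {
  required_providers {
    signalfx = {
      source = "splunk-terraform/signalfx"
    }
  }
}

# Org access token and realm-specific API URL (us1 rather than the default us0).
provider "signalfx" {
  auth_token = var.sfx_auth_token
  api_url    = "https://api.us1.signalfx.com"
}

variable "sfx_auth_token" {
  type      = string
  sensitive = true
}

resource "signalfx_dashboard_group" "demo_group" {
  name        = "Terraform Demo Group"
  description = "Dashboard group managed as code"
}

resource "signalfx_time_chart" "cpu_chart" {
  name = "CPU utilization"

  # SignalFlow program the chart plots -- illustrative only.
  program_text = <<-EOF
    data('cpu.utilization').publish(label='CPU utilization')
  EOF
}

resource "signalfx_dashboard" "demo_dashboard" {
  name            = "Terraform Demo Dashboard"
  dashboard_group = signalfx_dashboard_group.demo_group.id

  # Place the chart on the dashboard grid.
  chart {
    chart_id = signalfx_time_chart.cpu_chart.id
    row      = 0
    column   = 0
    width    = 6
    height   = 1
  }
}

From there, terraform init, terraform plan, and terraform apply behave exactly as in the walkthrough above.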
Hi Splunk, I have a table like below:

Component   Green   Amber   Red
Resp_time   0       200     400
5xx         0       50      100
4xx         0       50      100

I want to combine them to produce a single row like below:

Resp_time_Green   Resp_time_Amber   Resp_time_Red   5xx_Green   5xx_Amber   5xx_Red   4xx_Green   4xx_Amber   4xx_Red
0                 200               400             0           50          100       0           50          100
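A hedged sketch of one way to flatten the table, assuming the three-row table above is the current search result and the row-label field is literally named Component: untable turns it into name/value pairs, the eval builds the combined column names, and transpose pivots them back into a single row.

| untable Component color value
| eval name=Component."_".color
| fields name value
| transpose 0 header_field=name column_name=metric
| fields - metric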
Hi Team,

I have the below JSON field in the Splunk event:

[{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_1","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_2","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_3","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"}]

Just for example I have added 3 entries, but in reality we have more than 200 records in this field in a single event. When I'm using spath to extract this data it gives blank results; the same data, when tested with fewer records (<10), extracts all the key-value pairs. Is there a better way to extract fields from large event data? Please help me with the SPL query. Thanks. @yuanliu @gcusello
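A hedged sketch, assuming the array lives in a field called payload (substitute your actual field name). As I recall, spath stops extracting after a configurable number of characters (the extraction_cutoff setting under the [spath] stanza in limits.conf, 5000 by default), so very large fields usually need either that limit raised or an explicit spath over the field like this:

| spath input=payload path={} output=record
| mvexpand record
| spath input=record
| table RecordID recordStatus Remarks sourceAccountId destinationAccountId defaultOwnerId

This produces one result row per array element, which also tends to be easier to work with than 200+ multivalue fields on a single event.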
Hello,

We are using DB Connect to collect logs from Oracle databases. We are using a rising mode input, which requires the database statement to be written so that the column used for checkpointing is compared against a "?". Splunk DB Connect fills in the "?" with the checkpoint value. Occasionally we get "ORA-01843: Not a Valid Month" errors on inputs. The error itself is understood.* The question is: how do we rewrite the query to avoid this, when Splunk/DB Connect controls how the "?" in the query is replaced? Here is an example query:

SELECT ACTION_NAME, CAST((EVENT_TIMESTAMP at TIME zone America/New_York) AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
WHERE event_timestamp > ?
ORDER BY EVENT_TIMESTAMP asc;

How can we format the timestamp in the "?" in a way that the database understands and that meets the DB Connect rising input requirement? Thank you!

*(Our understanding is that it means the timestamp/time format in the query is not understood by the database. The fact that it happens only occasionally means there is probably some offending row within the result set.)
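A hedged sketch of one common workaround, assuming the checkpoint value DB Connect substitutes for the "?" is a string in a fixed format: wrap the "?" in an explicit conversion so the comparison no longer depends on the session's NLS date settings. The format mask below is an assumption and must be adjusted to match whatever DB Connect actually records for your rising column:

SELECT action_name,
       CAST((event_timestamp AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est,
       event_timestamp
FROM audsys.unified_audit_trail
WHERE event_timestamp > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS.FF')
ORDER BY event_timestamp ASC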
I have 2 indexes - index_1 and index_2.

index_1 has the following fields:
index1Id
currEventId
prevEventId

index_2 has the following fields:
index2Id
eventId
eventOrigin

currEventId and prevEventId in index_1 will have the same values as the eventId of index_2. Now, I am trying to create a table of the following format:

index1Id prevEventId prevEventOrigin currEventId currEventOrigin

I tried joins with the below query, but I see that columns 3 and 5 are mostly blank, so I am not sure what is wrong with the query.

index="index_1"
| join type=left currEventId [ search index="index_2" | rename eventId as currEventId, eventOrigin as currEventOrigin | fields currEventId, currEventOrigin]
| join type=left prevEventId [ search index="index_2" | rename eventId as prevEventId, eventOrigin as prevEventOrigin | fields prevEventId, prevEventOrigin]
| table index1Id, prevEventOrigin, currEventOrigin, prevEventId, currEventId

Based on online suggestions, I am also trying the following approach, but I couldn't complete it so that it populates all the columns:

(index="index_1") OR (index="index_2")
| eval joiner=if(index="index_1", prevEventId, eventId)
| stats values(*) as * by joiner
| where prevEventId=eventId
| rename eventOrigin AS previousEventOrigin, eventId as previousEventId
| table index1Id, previousEventId, previousEventOrigin

Please let me know an efficient way to achieve this. Thanks.
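A hedged sketch of a join-free variant of the second approach: tag every index_1 row with two role-prefixed keys (one for its currEventId, one for its prevEventId), tag every index_2 row with the same two keys for its own eventId, group by key to pick up the origin, and then reassemble per index1Id. It assumes index1Id is unique per index_1 event and that event IDs never contain the "|" separator:

(index="index_1") OR (index="index_2")
| eval joiner=if(index="index_1", mvappend("curr|".currEventId, "prev|".prevEventId), mvappend("curr|".eventId, "prev|".eventId))
| mvexpand joiner
| stats values(index1Id) as index1Id values(eventOrigin) as origin by joiner
| where isnotnull(index1Id)
| mvexpand index1Id
| eval role=mvindex(split(joiner, "|"), 0), id=mvindex(split(joiner, "|"), 1)
| eval currEventId=if(role="curr", id, null()), currEventOrigin=if(role="curr", origin, null()), prevEventId=if(role="prev", id, null()), prevEventOrigin=if(role="prev", origin, null())
| stats values(currEventId) as currEventId values(currEventOrigin) as currEventOrigin values(prevEventId) as prevEventId values(prevEventOrigin) as prevEventOrigin by index1Id
| table index1Id, prevEventId, prevEventOrigin, currEventId, currEventOrigin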
Hi all,

I would like to understand the minimum permissions required to enable the log flow between GitHub and Splunk Cloud. Going by the documentation for the app, the account used to pull in the logs requires:

admin:enterprise - Full control of enterprises
manage_billing:enterprise - Read and write enterprise billing data
read:enterprise - Read enterprise profile data

Can we reduce the number of highly privileged permissions required for the integration?
Hi All,

I have this compressed (reduced) version of a large structure, which is a combination of basic text and JSON:

2024-07-10 07:27:28 +02:00 LiveEvent: {"data":{"time_span_seconds":300, "active":17519, "total":17519, "unique":4208, "total_prepared":16684, "unique_prepared":3703, "created":594, "updated":0, "deleted":0,"ports":[
{"stock_id":49, "goods_in":0, "picks":2, "inspection_or_adhoc":0, "waste_time":1, "wait_bin":214, "wait_user":66, "stock_open_seconds":281, "stock_closed_seconds":19, "bins_above":0, "completed":[43757746,43756193], "content_codes":[], "category_codes":[{"category_code":4,"count":2}]},
{"stock_id":46, "goods_in":0, "picks":1, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":2, "wait_user":298, "stock_open_seconds":300, "stock_closed_seconds":0, "bins_above":0, "completed":[43769715], "content_codes":[], "category_codes":[{"category_code":4,"count":1}]},
{"stock_id":1, "goods_in":0, "picks":3, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":191, "wait_user":40, "stock_open_seconds":231, "stock_closed_seconds":69, "bins_above":0, "completed":[43823628,43823659,43823660], "content_codes":[], "category_codes":[{"category_code":1,"count":3}]}
]}, "uuid":"8711336c-ddcd-432f-b388-8b3940ce151a", "session_id":"d14fbee3-0a7a-4026-9fbf-d90eb62d0e73", "session_sequence_number":5113, "version":"2.0.0", "installation_id":"a031v00001Bex7fAAB", "local_installation_timestamp":"2024-07-10T07:35:00.0000000+02:00", "date":"2024-07-10", "app_server_timestamp":"2024-07-10T07:27:28.8839856+02:00", "event_type":"STOCK_AND_PILE"}

I eventually need each “stock_id” entry to end up as an individual event, keeping the common information along with it, like timestamp, uuid, session_id, session_sequence_number and event_type. Can someone guide me on how to use props and transforms to achieve this?

PS. I have read through several great posts on how to split JSON arrays into events, but none about how to keep common fields in each of them. Many thanks in advance.

Best Regards,
Bjarne
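Splitting at index time with props/transforms does not easily copy the shared fields into every cloned event, so here is a hedged, search-time alternative that produces one result row per ports{} element while carrying the common fields along (field names are taken from the sample above; _raw is assumed to start with the plain-text prefix before the JSON):

| rex field=_raw "LiveEvent:\s+(?<json>\{.+\})$"
| spath input=json path=data.ports{} output=port
| spath input=json path=uuid
| spath input=json path=session_id
| spath input=json path=session_sequence_number
| spath input=json path=event_type
| mvexpand port
| spath input=port
| table _time uuid session_id session_sequence_number event_type stock_id picks goods_in completed{}

The rex isolates the JSON payload, the first spath pulls the array, the next four copy the common fields onto the event, and mvexpand plus the final spath turn each array element into its own row with its own stock_id fields.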
I've written a Splunk query and run it, and it gives the results I expect. But as soon as I click on "Create Table View", some of the fields that were present after the query ran disappear. I'm not sure what is wrong; could anyone help?
Hello, guys! I'm trying to use the episodes table as the base search in the Edit Dashboard view, as well in the Dashboard Classic using the source, but here we already have the results in the table. I'll attach my code snippet below:    { "dataSources": { "dsQueryCounterSearch1": { "options": { "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttrSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(resolved)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "episodesBySeveritySearch": { "options": { "query": "|`itsi_event_management_episode_by_severity`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "noiseReductionSearch": { "options": { "query": "| `itsi_event_management_noise_reduction`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "percentAckSearch": { "options": { "query": "| `itsi_event_management_get_episode_count(acknowledged)` | eval acknowledgedPercent=(Acknowledged/total)*100 | table acknowledgedPercent", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttaSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(acknowledged)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" } }, "visualizations": { "vizQueryCounterSearch1": { "title": "Query Counter 1", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0 }, "dataSources": { "primary": "dsQueryCounterSearch1" } }, "episodesBySeverity": { "title": "Episodes by Severity", "type": "splunk.bar", "options": { "backgroundColor": "#ffffff", "barSpacing": 5, "dataValuesDisplay": "all", "legendDisplay": "off", "showYMajorGridLines": false, "yAxisLabelVisibility": "hide", "xAxisMajorTickVisibility": "hide", "yAxisMajorTickVisibility": "hide", "xAxisTitleVisibility": "hide", "yAxisTitleVisibility": "hide" }, "dataSources": { "primary": "episodesBySeveritySearch" } }, "noiseReduction": { "title": "Total Noise Reduction", "type": "splunk.singlevalue", "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorThresholds)", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "context": { "backgroundColorThresholds": [ { "from": 95, "value": "#65a637" }, { "from": 90, "to": 95, "value": "#6db7c6" }, { "from": 87, "to": 90, "value": "#f7bc38" }, { "from": 85, "to": 87, "value": "#f58f39" }, { "to": 85, "value": "#d93f3c" } ] }, "dataSources": { "primary": "noiseReductionSearch" } }, "percentAck": { "title": "Episodes Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "dataSources": { "primary": "percentAckSearch" } }, "mtta": { "title": "Mean Time to Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" }, "dataSources": { "primary": "mttaSearch" } } }, "layout": { "type": "grid", "options": { "display": "auto-scale", "height": 240, "width": 1440 
}, "structure": [ { "item": "vizQueryCounterSearch1", "type": "block", "position": { "x": 0, "y": 80, "w": 288, "h": 220 } }, { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 80, "w": 288, "h": 220 } }, { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 80, "w": 288, "h": 220 } }, { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 80, "w": 288, "h": 220 } }, { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 80, "w": 288, "h": 220 } } ] } }       I really appreciate your help, have a great day
Hello all, can anyone help me with this? I get data like this:

abc=1|productName= SHAMPTS JODAC RL MTV 36X(4X60G);ABC MANIS RL 12X720G;SO KLIN ROSE FRESH LIQ 24X200ML|field23=tip

I want to extract productName, but I can't because the productName value is not wrapped in quotes, so I'm not sure how to extract it. I've tried the SPL command

| makemv delim=";" productName

but the only result is SHAMPTS JODAC RL MTV 36X(4X60G); the rest doesn't appear. I also tried a regex with the command

| makemv tokenizer="(([[:alnum:]]+ )+([[:word:]]+))" productName

but the result is still the same. Is there any suggestion so that the values after each ";" can also be extracted?
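A hedged sketch, assuming the raw event really is the pipe-delimited key=value string shown above: first pull the whole productName value up to the next "|" with rex (so the automatic extraction stopping early no longer matters), then split it on ";". The mvexpand is only needed if you want one row per product:

| rex field=_raw "productName=\s*(?<productName>[^|]+)"
| makemv delim=";" productName
| mvexpand productName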
Hello, in a clustered or standalone environment, after upgrading Splunk core first and then Splunk ES, Incident Review no longer works and does not show any notables. The `notable` macro is in error and we can see SA-Utils Python errors in the log files.