All Topics


Hello, Why do some dropdowns have a filter box and others don't? Are there options on the <input type="dropdown"...> tag that need to be set? What other options are available for dropdowns? Thanks and God bless, Genesius
Does this affect anything typically? I ask because I have apps that I downloaded from Splunkbase and put into /opt/splunk/etc/shcluster/apps, then ran the recommended command, but those apps aren't showing up in Apps on any of the SHs in my cluster.
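For reference, the command usually recommended for pushing /opt/splunk/etc/shcluster/apps out to search head cluster members is apply shcluster-bundle, run on the deployer; a sketch of its usual form, with the target member URI and credentials as placeholders:

# run on the deployer; -target points at any search head cluster member's management port
/opt/splunk/bin/splunk apply shcluster-bundle -target https://<shc_member>:8089 -auth admin:<password>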
By now, you may have heard the exciting news that Edge Processor, the easy-to-use Splunk data preparation tool for filtering, transformations, and routing at the edge, is now Generally Available. Edge Processor gives data administrators for Splunk environments the ability to drop unnecessary data, mask sensitive fields, enrich payloads, and conditionally route data to the appropriate destination. Managed via Splunk Cloud Platform but deployed at the customer data edge, Edge Processor helps you control data costs and prepare your data for effective downstream use.

Alongside the GA announcement of Edge Processor, we are also excited to announce the General Availability of the SPL2 Profile for Edge Processor! The SPL2 Profile for Edge Processor contains the specific subset of SPL2 commands and functions that can be used to control and transform data within Edge Processor, and represents a portion of the entire SPL2 language surface area.

In Edge Processor, there are two ways you can define your processing pipelines. The first, which is great for quick and easy pipeline authoring, lets data administrators take advantage of the point-and-click features of the Edge Processor pipeline editor. From the same pipeline editor experience, users can also opt to work directly in the SPL2 code editor window for extremely flexible pipeline authoring. This allows data administrators to use Splunk's SPL2 language to author pipelines via a code editor in a manner familiar to SPL experts. This is extremely exciting, as it allows SPL syntactical patterns to be used for transformations on data in motion! Let's learn a bit more.

What is SPL2?

SPL2 is Splunk's next-generation data search and preparation language, designed to serve as the single entry point for a wide range of data handling scenarios; in the future it will be available across multiple products. Users can leverage SPL2 to author pipelines that process data in motion and to create and validate data schemas, while leveraging in-line tooling and documentation. SPL2 seeks to enable a "learn once, use anywhere" language model across all Splunk features in a manner extremely familiar to SPL users today. SPL2 takes the great parts of SPL - the syntax, the most used commands, the investigation-friendliness, and the flow-like structure - and makes them available not only against data at rest (e.g., via splunkd), but also for streaming runtimes. This allows data administrators, developers, and others who are familiar with SPL, but unfamiliar with configuring complex rules in props and transforms, to translate their existing SPL knowledge and apply it directly to data in motion via Edge Processor.

A template for an SPL2 pipeline that masks IP addresses from the hostname field of syslog data.

SPL2 is already used implicitly by multiple Splunk products today, under the hood, to handle data preparation, processing, search, and more. Over time, we intend to make SPL2 available across the entire Splunk portfolio to support a truly unified platform.

Customers familiar with SPL will be very pleased to hear that SPL2 has introduced a range of new functionality to more seamlessly support data preparation in motion, including:

Data does not have to be cast from one type to another. SPL2 is a weakly typed language with the option for users to create type constraints (including custom types) where necessary; by default, SPL2 implicitly converts between unrelated types, meaning that casting is no longer required.
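As a loose illustration of that point, here is a minimal, hypothetical pipeline sketch (the status_code field and the rex pattern are illustrative, not from the original post); the field is extracted as a string, yet can be compared and used arithmetically with no explicit cast:

$pipeline = from $source
    // status_code is extracted as a string but is compared and subtracted as a number
    | rex field=_raw /status_code=(?P<status_code>\d+)/
    | where status_code >= 500
    | eval error_class = status_code - 500
    | into $destination;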
This allows data administrators to spend less time worrying about field format and schema for incoming data, and more time concentrating on getting the right data to the right place.

Source and destination functions, which were highly bespoke, are replaced with datasets. These datasets can be created, permissioned, and managed independently, and map cleanly to the locations you want to read from and write to. This allows data administrators to more granularly control how data is accessed and written, while also promoting easy reusability across pipelines. Metadata about the destination is captured in the dataset configuration rather than the pipeline definition, so you do not have to pass this metadata in the pipeline itself; this results in clean pipeline definitions that can be easily understood and copied.

JSON handling can be done seamlessly with a range of JSON manipulation eval functions, rather than ucast or other complex logic.

What is the SPL2 Profile for Edge Processor?

SPL2 supports a wide range of operations on data. The SPL2 Profile for Edge Processor represents the subset of the SPL2 language that can be used via the Edge Processor offering. For example: at launch, Edge Processor is primarily built to help customers manage data egress, mask sensitive data, enrich fields, and prepare data for use in the right destination. SPL2 commands and eval functions that support these behaviors are included in the profile for Edge Processor to ensure a seamless user experience. Learn more about SPL2 profiles and view the command compatibility matrix by product for SPL2 commands and eval functions.

How does Edge Processor use SPL2?

Edge Processor pipelines are logical constructs that read data from a source, perform a set of operations on that data, and then write that data to a destination. All pipelines are defined entirely in SPL2 (either directly in the code editor for Edge Processor, or indirectly via the GUI for pipeline authoring). SPL2 pipelines define an entire set of transformations, often related to similar types of data. All pipelines must follow this syntax:

$pipeline = from $source | <processing logic> | into $destination;

Take the below Edge Processor pipeline, defined in SPL2:

$pipeline = from $source | rex field=_raw /user_id=(?P<user_id>[a-zA-Z0-9]+)/ | into $destination;

This SPL2 pipeline can be decomposed into multiple components:

$pipeline - the definition of the pipeline statement that will be applied on any given Edge Processor node or cluster. As denoted by the dollar sign ($), it is a parameter, meaning that everything on the right-hand side of the assignment (=) is assigned to the left. Note: in the case of a very long and complex pipeline, you can decompose it into segments, like so with this pseudo-code SPL2:

$pipeline_part_1 = from $source | where … | rex field=_raw /fieldA… fieldB… fieldC…
$pipeline = from $pipeline_part_1 | eval … | into $destination;

from $source - indicates that this pipeline should read from a specific dataset, referenced by the dataset variable $source. This variable can be assigned a specific dataset representing your data to be processed via the Edge Processor data configuration panel - in this case, $source is a preconfigured sourcetype you can set up in the Edge Processor management pages.

rex field… - a regular expression that extracts the user_id field from the _raw field.
It is important to note that Edge Processor only supports the RE2 regular expression flavor, not PCRE.

into $destination - indicates that this pipeline should write to a destination, referenced by the dataset variable $destination. This variable can be assigned a specific dataset, such as a Splunk index or an S3 bucket, via the Edge Processor data configuration panel.

As you can probably tell, there are some differences between the SPL2 here and the SPL you know. The first is that SPL2 allows for not just single expressions, but expression assignments; entire searches can be named, treated as variables, and linked together to compose a single dispatchable unit. SPL2 also supports writing into datasets, not just reading from them (with a slightly different syntax). Datasets can be different things - indexes, S3 buckets, forwarders, views, and more. You'll likely be writing to a Splunk index most of the time. You can find more details about the differences between SPL2 and SPL here.

But what if your pipeline isn't constrained to a single sourcetype? For these scenarios, you can instead read from a specific dataset called all_data_ready (the consolidation of all Edge Processor ingress data) and apply any sourcetype logic you'd like:

$pipeline = from $all_data_ready | where sourcetype="WMI:WinEventLog:*" | rex field=_raw /user_id=(?P<user_id>[a-zA-Z0-9]+)/ | into $destination;

where sourcetype="WMI:WinEventLog:*" - this is a filter that takes the data that is piped in and only keeps events matching this specific sourcetype. The rest of the pipeline operates only on this sourcetype.

How does SPL2 make data preparation simpler?

You may have begun to see that SPL2 is not just a set of commands and functions, but also a set of core concepts underneath that enable powerful data processing scenarios. In fact, Edge Processor ships out-of-the-box SPL2 pipeline templates that address some common data preparation use cases. Beyond these templates, let's walk through a few examples that highlight how SPL2 makes data preparation simpler.

I want to logically separate components of complex, multi-stage pipelines.

SPL2 allows pipelines to be defined in multiple stages, for ease of organization, debugging, and logical separation. Using the statement assignments as variables later in the SPL2 module allows data admins to modularly compose their data preparation rules.
$capture_and_filter = from $all_data_ready | where sourcetype="WinEventLog:*"
$extract_fields = from $capture_and_filter | rex field = _raw /^(?P<dhcp_id>.*?),(?P<date>.*?),(?P<time>.*?),(?P<description>.*?),(?P<ip>.*?),(?P<nt_host>.*?),(?P<mac>.*?),(?P<msdhcp_user>.*?),(?P<transaction_id>.*?),(?P<qresult>.*?),(?P<probation_time>.*?),(?P<correlation_id>.*?),(?P<dhc_id>.*?),(?P<vendorclass_hex>.*?),(?P<vendorclass_ascii>.*?),(?P<userclass_hex>.*?),(?P<userclass_ascii>.*?),(?P<relay_agent_information>.*?),(?P<dns_reg_error>.*?)/
$indexed_fields = from $extract_fields | eval dest_ip = ip, raw_mac = mac, signature_id = msdhcp_id, user = msdhcp_user
$quarantine_logic = from $indexed_fields | eval quarantine_info = case(qresult==0, "NoQuarantine", qresult == 1, "Quarantine", qresult == 2, "Drop Packet", qresult == 3, "Probation", qresult == 6, "No Quarantine Information")
$pipeline = from $quarantine_logic | into $destination

As you can see above, we've defined four processing "stages" of this pipeline: $capture_and_filter, $extract_fields, $indexed_fields, and $quarantine_logic, with each flowing into the next, and of course with $pipeline tying it all together into the destination. When $pipeline is run, all stages are concatenated behind the scenes, allowing the pipeline to work as expected while maintaining a degree of logical segmentation and readability.

I have a complex nested JSON event that I want to easily turn into a multivalue field and then extract into multiple events.

If you've ever worked with JSON in Splunk, you know that it can be… tricky. It's a never-ending combination of mvindexes, mvzips, evals, mvexpands, splits, and perhaps even SEDCMD in props.conf. With SPL2, it's easier than ever, with the expand() and flatten() commands! Often used together, they can first expand a field that contains an array of values to produce a separate result row for each object in the array, then flatten the key-value pairs in each object into separate fields in the event, repeating as many times as necessary.

Let's take the JSON below, passed as a single event, as an example, and assume it is represented by a dataset named $json_data. We want to create the timestamp at index time (which was previously missing) and extract each nested stanza into its own event:

[
  { "key": "Email", "value": "john.doe@bar.com" },
  { "key": "ProjectCode", "value": "ABCD" },
  { "key": "Owner", "value": "John Doe" },
  { "key": "Email", "value": "jane.doe@foo.com" },
  { "key": "ProjectCode", "value": "EFGH" },
  { "key": "Owner", "value": "Jane Doe" }
]

By itself and without preparation, we're being passed a single event with the fields stuck in the JSON body. But we can write the following SPL2 to easily flatten this JSON and timestamp it:

$pipeline = FROM $json_data as json_dataset | eval _time = now() | expand json_dataset | flatten json_dataset | into $destination

This results in the JSON being extracted into multiple events, each with its own fields.

Getting started with SPL2 in Edge Processor

SPL2 within Edge Processor is extremely powerful, and this blog post only scratches the surface! If you're interested in learning more about SPL2 or the SPL2 Profile for Edge Processor, join in! Reach out to your account team to get connected, or start a discussion in splunk-usergroups Slack.
I always want SEV HIGH and SEV MEDIUM events from all AWS applications to be routed to my "alerts" Splunk index, and all SEV LOW events to be routed to my "low_sev_s3" AWS S3 bucket. All events without an attached severity level should default to an "audit_s3" AWS S3 bucket. To achieve this, you can use the same logic from above - stringing multiple statements together to create a mega-pipeline - to create individual smaller pipelines routing from the same dataset.

$pipeline = from $source … | <rex> | eval…
    | branch
        [where Severity="HIGH" or Severity="MEDIUM" | into $alerts],
        [where Severity="LOW" | into $low_sev_s3],
        [where Severity != "HIGH" and Severity != "MEDIUM" and Severity != "LOW" | into $audit_s3]

Using branching in this manner, combined with the custom logic and multiple destinations, allows for this to be seamlessly represented in SPL2!
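To make the pseudo-code above a little more concrete, here is a hedged sketch of the same branching pattern with the elided pieces filled in using purely illustrative names (the severity extraction, the sourcetype filter, and the dataset names are assumptions, not taken from the original post):

$pipeline = from $source
    | where sourcetype="aws:*"
    // extract a Severity value such as HIGH, MEDIUM, or LOW from the raw event
    | rex field=_raw /severity=(?P<Severity>[A-Z]+)/
    | branch
        [where Severity="HIGH" or Severity="MEDIUM" | into $alerts],
        [where Severity="LOW" | into $low_sev_s3],
        [where Severity != "HIGH" and Severity != "MEDIUM" and Severity != "LOW" | into $audit_s3]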
We get it - not only can it take a lot of time, money, and resources to get data into Splunk, but it also takes effort to shape the data in a way that will provide you the most value. But it doesn't have to anymore, thanks to Splunk's latest innovation in data processing.

Splunk is pleased to announce the general availability of Splunk Edge Processor, a service offering within Splunk Cloud Platform designed to help customers achieve greater efficiencies in data transformation close to the data source, and improved visibility into data in motion. Edge Processor provides customers new abilities to filter, mask, and otherwise transform their data before routing it to supported destinations. Edge Processor joins Ingest Actions as part of Splunk's pre-ingest data transformation capabilities. All current Edge Processor features are free to all Splunk Cloud customers.

What gives Edge Processor its data transformation power is Splunk's next-generation data search and preparation language, SPL2. With SPL2, customers have much more flexibility to shape data so that it is formatted exactly how they want before sending it to be indexed.

Unique to Edge Processor is its architecture, chiefly the cloud-based control plane. Edge Processor nodes are easily installed and configured on customer servers or customer cloud infrastructure using a single command, and managed completely from Splunk Cloud Platform. These nodes are an intermediate forwarding tier, and receive data from edge sources. Customers manage their entire fleet of edge processors and have visibility into both inbound and outbound data volumes through their edge processor network, all from a single place. Any node can then scale horizontally to handle increasing processing or data volume requirements by simply adding instances. Customers have detailed metrics to view the impact of their pipelines on data flowing through each of their edge processors and can closely track unexpected spikes or troughs in their data.

From the central cloud control plane, customers define data processing logic - pipelines - that dictates their desired filtering, masking, and routing logic, and can apply their pipelines to any or all edge processors in their network. Edge Processor pipelines are constructed using SPL2 in the new pipeline editor experience, where users can see previews of the data showing the impact of applying a pipeline before making a change. The data plane remains completely within the customer's control - customers point data sources to an edge processor node that is installed on their hosts, and that data is only sent to where customers direct it. At launch, Edge Processor can receive data from Splunk universal and heavy forwarders, and route data to Splunk Enterprise, Splunk Cloud Platform, and Amazon S3. Customers have a guided pipeline editor experience with the ability to preview the effect of their pipeline on sample data that they provide.

Edge Processor using SPL2 makes data transformation easy and flexible. One of the most common use cases for Edge Processor is filtering verbose data sources, such as Windows event logs, to retain selected events or content within an event. An explicit set of examples for this use case is: retaining only Windows events that match a certain event code, masking the extensive message field at the end of Windows events, and routing an unfiltered copy of data to an AWS S3 bucket.
The pipelines below show how these examples are constructed; the user controls what data the pipeline applies to, how that data is to be processed, and where the processed data is routed.

Use case: Filter Windows system events on event ID, route to Splunk Cloud index "Security"
$source: sourcetype = winEventLog: system
$destination: Splunk index: Security
Pipeline definition (SPL2):
$pipeline = | from $source
// Extract event code field
| rex field=_raw /EventCode=(?P<event_code>\d+)/
// retain all events with windows event code = 9
| where event_code = 9
| into $destination;

Use case: Mask Windows system events to remove the final "Message" contents, route this copy to Splunk Cloud index "Main"
$source: sourcetype = winEventLog: system
$destination: Splunk index: Main
Pipeline definition (SPL2):
$pipeline = | from $source
| eval _raw=replace(_raw, /(Message=.*[\r\n?|\n])((?:.|\r\n?|\n)*)/, "\\...")
| into $destination;

Use case: Route an unfiltered copy of ALL Windows events to AWS S3 bucket "Windows"
$source: sourcetype = winEventLog*
$destination: S3 bucket: Windows
Pipeline definition (SPL2):
$pipeline = | from $source
| into $destination;

With Edge Processor, customers will experience increased visibility of data in motion and improved productivity, simplicity, and control of data transformations, all at scale. What's more, Edge Processor is another capability to help customers manage costs and boost value from their Splunk investment, serving as a sort of forcing function to organize and prioritize data according to use case so that you work with just the data you want, in the location you need it.

If you are a current Splunk Cloud Platform customer hosted in the US or Dublin Splunk Cloud regions, you can get access to Edge Processor today. Contact your Splunk sales representative, or send an email to EdgeProcessor@splunk.com with your company name, Splunk Cloud stack name, and Splunk Cloud region. If you are a Splunk Cloud Platform customer hosted in other Splunk Cloud regions, also contact your Splunk sales representative or send an email to get on the list to be enabled once Edge Processor is available in your region.

For more about Edge Processor, including release plans to support additional sources, destinations, and new functionality, see the release notes and documentation.

-Courtney Wright - Senior Product Marketing Manager, Platform
Hello Everyone, I am trying to find outliers in connection duration on a specific subnet but am having trouble getting the outliers part to show any results. I want to get the avg duration of all traffic connections from a subnet (or list of IPs) by sourceIP and application, so I am grabbing the average of connections in a 15m bin. After evaluating the outliers I want to display the time bin, sourceIP, application, AvgDuration and Outlier. I have tried the following 2 queries so far and neither gives results:

1. index=firewall sourceip=10.0.0.1/24 | bin span=15m _time | stats avg(duration) AS AvgTotal by sourceip, _time, app | eval outlier=if(duration>AvgTotal*3,1,0) | table _time sourceip app AvgDuration outlier

2. index=firewall sourceip=10.1.11.1 | timechart span=15m avg(duration) AS AvgDuration by sourceip, _time, app | eval outlier=if(duration>AvgDuration*3,1,0) | table _time sourceip app AvgDuration outlier

This is just a test query I am trying, with plans to build on it. I think there is something wrong in how I am calling the table. What am I doing wrong in the 2 queries?
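A minimal sketch of one way to keep the per-event duration available for the comparison (after stats, the original duration field no longer exists, which is one likely reason the eval produces nothing); eventstats adds the 15-minute average as a new field without collapsing the events, and the 3x-average threshold is taken from the post:

index=firewall sourceip=10.0.0.1/24
| bin span=15m _time
| eventstats avg(duration) AS AvgDuration by _time, sourceip, app
| eval outlier=if(duration > AvgDuration*3, 1, 0)
| table _time sourceip app duration AvgDuration outlier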
FW: [ DOC 45 ] DTP: DEMO XXX CCC | 20147

I want to extract the number after the pipe as a field named "data". What is the regex?
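Assuming the line above is the raw event and "pipe" refers to the | character, a minimal rex sketch that captures the digits following it into a field called data:

| rex field=_raw "\|\s*(?<data>\d+)"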
Hello, I'm working in Dashboard Studio. I have a drop-down to choose a store and show a chart related to this store. For the drop-down, the label is the name of the store and the value (used in the search) is its number. I want to put the name (label) of the store in a title, but if I use the token, it's the value (the store number) that is printed. Does somebody know how to print the label (and not the value) with Dashboard Studio?
Hi, I am formatting data as required and getting it in the format below. Now I want to calculate the average of only the highlighted fields (in green), i.e. Q1_score PREPAID, Q2_score PREPAID, Q1_score CONSUMER, and so on. For example, the "Count by Segment" value for Q1_score CONSUMER should be 4.50. This is the last piece of my query:

| addcoltotals COUNT* Q1* Q2* Q3* Total | eval Month=coalesce(Month, "Count by Segment")

Please suggest.
So, I wanted to split the path into multiple parts so that I can count whatever I want to count, like active or dev or usa, etc. We have a few paths, i.e. below:

path=/dev/site/usa/active
path=/prod/site/usa/inactive
path=/dev/site/Germany/cleaning
path=/qa/site/Austria/maintenancemode

So now I want to count each value like active by usa, dev, and then I want to get the top 5 counts of it. In the results I want to see the bar graph with active, cleaning, maintenancemode instead of the whole path. Note: I don't have backend access.
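A minimal sketch of one way to pull the individual path segments out and chart the top values of the last one (the field names env, country, and state are purely illustrative, and index=your_index is a placeholder for the actual base search):

index=your_index
| rex field=path "^/(?<env>[^/]+)/site/(?<country>[^/]+)/(?<state>[^/]+)$"
| top limit=5 state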
Hello - I have a table with the following:

host HOST FQDN DNS_NAME HOST_MATCH INDEX
hostalpha hosta.mydomain.com hosta false index_a
hosta host - true index_b

Created from the following search:

base_search | rex field=FQDN "^(?<DNS_NAME>[^.]+)\..*$" | fillnull value="-" DNS_NAME | eval HOST_MATCH=if(host=='DNS_NAME',"true","false")

How would I do the following:
1. If HOST != DNS_NAME, make HOST = DNS_NAME
2. If DNS_NAME = "-", make DNS_NAME = HOST
Thanks!
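A minimal sketch of the two conditional replacements described above, using the field names from the post; handling the "-" case first avoids setting both fields to "-" when DNS_NAME was never extracted:

| eval DNS_NAME=if(DNS_NAME=="-", HOST, DNS_NAME)
| eval HOST=if(HOST!=DNS_NAME, DNS_NAME, HOST)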
Hello, I would like to uninstall Splunk on my Windows machine. Do I need to stop the service first and then uninstall the program from Control Panel, or can I directly uninstall it? Can someone please help me with it? Thanks
We had an EC2 instance become inaccessible via AWS Session Manager. The root cause was the main volume filling up with various splunkforwarder-x.x.x RPM files in /usr/bin/. Yesterday the filesystem was cleaned up, but today there's another copy of that RPM in the /usr/bin/ directory. Does anyone know why this is happening?
So I couldn't find anything in Splunk Community that answers my question about pushing an update to a lookup table. I manually updated the .csv file on the backend search head server. I deleted a line and replaced it with another hostname. When I run the command:

|inputlookup dns_hosts.csv | stats count by host | eval count=0 | join host type=outer [ search index="dns" | stats count by host] | fillnull | where count=0 | fields host count

I'm still getting the host that has a count of 0, the host that I removed in the csv file. My question is: do I need to restart the search head to push that change? I didn't change any config files, just the lookup file under the specific app directory's lookup folder. I wasn't sure if Splunk would automatically read the updated file after a certain amount of time, or if I needed to restart the server for it to take effect. And will that file replicate across all search heads after I restart it? Thank you for any guidance.
Hi all, I have one question: I upgraded my Splunk deployment from 8.1.6 to 9.0.4. The deployment is: 3-node SH cluster, 3-node IDX cluster, 2 x HF, MC, SHC-D, CM, LM, DS. After the upgrade I noticed one thing about queues on the Monitoring Console. Before the upgrade, all queues on all IDXs had 0% fill. But after the upgrade, there is a small fill (average about 5%, up to 10%) on the Typing and Indexing queues. From my point of view this is strange, because nothing changed during the upgrade - the HW is the same, the amount of ingested data is the same, the kind of data is the same, no new log sources, etc. I searched through the documentation but did not find anything relevant. So I would like to ask: what is happening? Can it be safely ignored, or is there really something wrong inside Splunk? Are some config changes required because of internal changes in Splunk? Could you share your experience with this, if you have any? Thank you in advance for any hint or clue. Best regards, Lukas Mecir
Hello Splunkers, I would like to set an alert if a sudden high number of events is received. I have this base search:

index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker | eval events=eps*kb/kbps | timechart fixedrange=t span=1m limit=5 sum(events) by series

So I have the number of events per source per minute. I would like to trigger an alert if there are more than X events in 5 consecutive minutes from one source. Thanks for your hints in advance.
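A minimal sketch of one way the "more than X events in 5 consecutive minutes" condition could be expressed, building on the same base search; streamstats takes the minimum of the last five one-minute sums per series, so the threshold is only exceeded when all five minutes exceed it (1000 is a placeholder for X):

index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker
| eval events=eps*kb/kbps
| bin _time span=1m
| stats sum(events) AS events by _time, series
| streamstats window=5 min(events) AS min_events by series
| where min_events > 1000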
Hey, I would like to configure a webhook to send Meraki (Cisco) alarms to Splunk On-Call. There isn't a dedicated 3rd-party integration for this, and the generic "REST" integration isn't working with it. Is there any way to add Meraki to the 3rd-party integrations, or any other way to make it work? Thanks in advance
Hi, I have a query which gives a table of results. Now, instead of exporting the table, I need to export the raw events themselves. How can I do that? Instead of exporting 9980 values, I need to export the whole 16882 events. Any help would be appreciated!
I have created a bar chart with a y-axis of status counts, which are "new" and "closed", but it displays the "closed" bar block first and then the "new" bar block. I want it to show "new" first and then "closed". How do I do that?
Hello Splunkers!! I have the query below, and from it I want results as shown in the table below (taken from Excel). Please help me achieve that result.

index=ABC sourcetype=ABC | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M") | stats count count(eval(ShuttleId)) as total by sourcetype | table sourcetype total | join max=0 type=outer sourcetype [| search index=ABC sourcetype=ABC | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M") | stats count by ShuttleId sourcetype _time] | table ShuttleId count total | eval condition =if(round((count/total),2) <=0, "GREEN", "RED") | eval Status =round((count/total),2) | eval Shuttle_percentage = round(((count/total)*100),2) | table ShuttleId Shuttle_percentage

Desired results:
_time ShuttleId Total_Orders Errors
2022-08-03T00:00:00.000+0000 Shuttle_001 69341 117
2022-08-04T00:00:00.000+0000 Shuttle_002 85640 51
2022-08-05T00:00:00.000+0000 Shuttle_003 72260 43
2022-08-06T00:00:00.000+0000 Shuttle_004 60291 22
2022-08-07T00:00:00.000+0000 Shuttle_005 0 0
Hi all, Is it currently possible to somehow create a conditional macro expansion? For example, I have different lists of hosts and want the expansion to depend on the macro argument:

`myhosts(old)` would expand to host=hostname1 OR host=hostname2
`myhosts(new)` would expand to host=hostname3 OR host=hostname4

I looked into different functions to somehow implement it but could not find a solution. Thank you.
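One pattern that may fit is an eval-based macro, where the macro definition is an eval expression and its string result is substituted into the search; a sketch under that assumption in macros.conf (the stanza name and hostnames mirror the post, but the exact quoting is illustrative and untested):

[myhosts(1)]
args = generation
definition = case("$generation$"=="old", "host=hostname1 OR host=hostname2", "$generation$"=="new", "host=hostname3 OR host=hostname4")
iseval = 1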