Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I recently integrated a zipped log file. Every day at a particular time, a few additional log lines are appended to it, but Splunk is ingesting the file strangely. For example, on 25/9/20 the file had 10 lines, and Splunk showed the same count of 10. On 26/9/20, 5 more lines were appended; ideally only those 5 new lines should be ingested, but that is not what happens.

Actual behavior of the Splunk UF agent:
- 25/9/20: line count on the server is 10; Splunk count is 10.
- 26/9/20: 5 lines were added, so the server count is 15, but Splunk re-indexed the whole file, making the total since 25/9/20 15+10=25 (ideally it should be 15).
- 27/9/20: 5 more lines were added, so the server count is 20, but Splunk re-indexed the whole file again, making the total 20+(15+10)=45 (ideally it should be 20).

So the first indexed lines now appear three times and the latest only once, and the duplication multiplies daily. The log file is not rotating: all new lines are appended to the same file, which sits on the server in zipped format. Can someone please help me fix this ingestion behavior? Please!
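As far as I know, Splunk treats archive files (.zip, .gz) as whole documents: when the archive changes, it decompresses and re-indexes the entire contents, which would explain the multiplying counts. A sketch of monitoring an uncompressed copy of the log instead (the path, index, and sourcetype below are hypothetical, and this assumes a plain-text copy can be written or unzipped to that location):

```ini
# inputs.conf on the UF -- a sketch, not the poster's actual config.
# Monitor the plain-text log so only appended lines are ingested.
[monitor:///var/log/myapp/app.log]
index = my_index
sourcetype = my_sourcetype
disabled = false
```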
I have a Splunk app named "E-Bark". When I run a query in the search bar inside this app I can see results, but not when I run the same query in "Search & Reporting". I want to visualize the results in Grafana, but Grafana only fetches data from the indices visible to "Search & Reporting". Do I need to make any change to the search query?
I have a dashboard used for generating data for reports. Since its initial build, I've been going back and revamping the searches to use tstats commands, which has sped up the queries tenfold. It's taken some unique approaches, but I have managed to convert all of the queries (roughly 13-14) except one. This particular one works with data that was parsed into an array/multivalue. The final table has several of these multivalue fields, and each should stay aligned with the others. This is important to mention because many of the commands I have used to speed up the other searches sort or clump values by default, which I don't want. So, for example, this is my desired output (note I actually have about 6 of these array fields in my data):

Desired result:
ID 1 | ID 2 | Data 1              | Data 2
1    | 12   | 300 400 200 1000    | 2 3 5 4
1    | 18   | 500 100 200 400 150 | 3 4 5 1 1

For some of these fields I have to perform math on the multivalue to convert it from one unit to another (think something like MB to GB), and in other cases I have to perform a lookup over a range, which requires a map command with inputlookup (reference this ticket: https://community.splunk.com/t5/Splunk-Search/Lookup-With-Value-Between-Two-Lookup-Fields/m-p/517298). Through much trial and error I have found I have to handle each multivalue separately to do the math or lookup, because I can't recombine it without messing up the other multivalues. I have a working version using the join command with a basic search, but it's 3 joins deep to cover all the math and lookups, so it takes forever to load. I have similar tables with many joins or maps that I managed to speed up with tstats, but they don't have this requirement of keeping the data aligned (no sorting).
I have found that by pulling the IDs with an initial tstats search, then using map commands to fetch the data and taking the first result with head, I can speed it up greatly. Something like this:

| tstats count where index=blah source=*blah* groupby ID1, ID2, timestamp
| dedup ID1, ID2 sortby -timestamp
| fields - count
| map search="| search index=$index$ source=$source$ ID1=$ID1$ ID2=$ID2$ timestamp=$timestamp$ | head 1 | fields Data1 | mvexpand Data1 | do some math or something here | stats list(Data1) AS Data1 | eval index=$index$, source=$source$, ID1=$ID1$, ID2=$ID2$, timestamp=$timestamp$"
| map search="| search index=$index$ source=$source$ ID1=$ID1$ ID2=$ID2$ timestamp=$timestamp$ | head 1 | fields Data2 | mvexpand Data2 | do some math or something here | stats list(Data2) AS Data2 | eval index=$index$, source=$source$, ID1=$ID1$, ID2=$ID2$, timestamp=$timestamp$, Data1=$Data1$"

This works great until I start handling the multivalues and trying to pass them forward while populating the data. The minute I try passing Data1 on, the map command encapsulates it in a string like "500 100 200 400 150", and it doesn't seem to want to turn back into a multivalue with makemv, split, or anything; it reads it all as one value and I can't break it back into the multivalue list. I've also tried the subsearch approach, but apparently that only works at the top, because trying to do

eval Data1=[| makeresults | Data1=$Data1$ | return $Data1 ]

fails. Is there something I'm missing, or a command that would solve this? This would be so much quicker for us if I could get this to work, and it is showing a lot of promise except for that piece.
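Not from the thread, just a general technique sketch: when a multivalue field has been flattened into a space-delimited string, eval's split() can usually rebuild it. The sample value is taken from the question; whether the string survives the map template unchanged is an assumption:

```spl
| makeresults
| eval Data1="500 100 200 400 150"
| eval Data1=split(Data1, " ")
| table Data1
```

Unlike makemv on an already-extracted field, split() operates on the string value directly, which can behave differently inside a map subsearch.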
Hello - I would like to decrease the space between the checkbox and the submit button [the space marked in red]. How might I do that? Thanks in advance.
Hi, I have a search ending like this:

| chart count over service by environment
| where prod>50 OR dev>50

It returns me a table:

service | prod | dev
AAA     | 16   | 110
BBB     | 225  | 0

The problem is that I want to mask the 16 and the 0 and keep only results above 50. How can I do that? Thanks!
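One general pattern (a sketch, not from the thread): foreach can null out any cell at or below the threshold while keeping the rows that the where clause retains:

```spl
| chart count over service by environment
| where prod>50 OR dev>50
| foreach prod dev
    [ eval <<FIELD>>=if('<<FIELD>>'>50, '<<FIELD>>', null()) ]
```

The `<<FIELD>>` token is foreach's template placeholder, so the same test applies to every listed column without repeating the eval.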
Hi, my team will be performing an upgrade of Splunk Cloud. We need to understand how all of our artifact types change before and after the upgrade: lookup tables, macros, data models, saved searches, etc. Therefore, we are implementing the following process.

1. Pre-upgrade: create a REST-based search of all artifacts and pipe it to a lookup table:

| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local
| outputlookup acc_macro_schema.csv

2. Post-upgrade: create a search that compares all fields in the new REST-based search to the information in the pre-upgrade lookup table, and returns only the field values of artifacts that changed post-upgrade. See the beginning of the query below:

| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local
| eval test_source="After"
| append
    [| inputlookup acc_macro_schema_test.csv
     | eval test_source="Before"]

Has anyone created a search to accomplish this goal?
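One way a before/after comparison like this is often finished (a sketch; `definition` is just one example field returned by the macros endpoint, and the lookup name follows step 1 of the question):

```spl
| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local
| eval test_source="After"
| append
    [| inputlookup acc_macro_schema.csv
     | eval test_source="Before"]
| stats dc(test_source) AS versions values(test_source) AS seen_in BY title definition
| where versions=1
```

A (title, definition) pair that appears with only one test_source value exists only Before or only After, i.e. it was added, removed, or changed by the upgrade.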
We want to detect multiple events occurring together; for example, we want to find cases where event 4741 and event 4743 happen together.

Scenario 1: at a certain time (2020.3.20 18:00:00), both 4741 and 4743 happen together.
Scenario 2: the interval between 4741 and 4743 is short (less than 2 seconds).

How do we define SPL for these two scenarios? Do we need a correlation search?
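A sketch of one common pattern covering both scenarios (the index name and the `host` grouping field are assumptions; scenario 1 is just the degenerate case of scenario 2 with a zero interval):

```spl
index=wineventlog EventCode IN (4741, 4743)
| transaction host maxspan=2s
| search EventCode=4741 EventCode=4743
```

transaction groups the events into one record when they fall within 2 seconds of each other, and the final search keeps only groups that contain both codes. A correlation search would only be needed to turn this into a scheduled notable in Enterprise Security.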
Hey, I am trying to work with a lookup table where the input contains 3 fields (A, B, C) and the output is D.

Lookup table structure:
A | B | C | D
a | b |   | d

Here is my configuration:

props.conf:
LOOKUP-result = lookup_table A B C OUTPUT D

transforms.conf:
[lookup_table]
filename...

When I run a query where there is no field named C (for example: A=a, B=b), the returned output is not the "d" I expect. What am I missing here? How can I fix it?
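For what it's worth, an automatic lookup only matches when every input field is present in the event, so an event with no C field never matches the row whose C cell is empty. A sketch of one workaround, done in the search rather than in props.conf (field and lookup names are taken from the question):

```spl
| eval C=coalesce(C, "")
| lookup lookup_table A B C OUTPUT D
```

coalesce fills the missing C with an empty string, which then matches the empty C cell in the CSV row.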
Hello, I have an API integration on my HF that gets data, and the HF then forwards that data to the indexers. I need to declare an index for this data on the HF, but the path isn't the same as on the indexer because the HF storage isn't mounted the same way. Basically, this is what the indexer's indexes.conf looks like:

[index_name]
coldPath = volume:cold/index_name/colddb
homePath = volume:primary/index_name/db
thawedPath = $SPLUNK_DB/index_name/thaweddb

What I need to add to the HF would look like this:

[index_name]
coldPath = $SPLUNK_DB/index_name/colddb
homePath = $SPLUNK_DB/index_name/db
thawedPath = $SPLUNK_DB/index_name/thaweddb

Since the HF isn't indexing data, does it matter if I create the index on the HF like this, even though that's not what the indexer's indexes.conf looks like? The reason I have to add the index is the Dell EMC Isilon Add-on for Splunk Enterprise app: you have to declare the index in the setup of the add-on. Thanks!
Hello, I have a pie chart that shows the count of all the alerts I get on my system, split by severity. Because I couldn't find a way to show the count (only the percentage), I changed the slice name so it contains both the severity name and the count (using an eval). Before I changed the name I used charting.fieldColors to set a specific color for each severity (slice), but now that the name constantly changes with the count, I am no longer able to set a color per slice. I tried charting.seriesColors, but with that I can't guarantee that the critical severity always gets red, warning gets orange, and so on, because the color is not tied to a specific slice. Is there a way to set the colors by name when I don't have the full name of the slice, maybe using a wildcard like "Critical*", so that any slice containing the word Critical is always colored red? If that is not possible, is there a way to show the count instead of the percentage without using eval to change the slice name? All answers will help, thanks!
Hi all - just want to ask if anyone here has encountered the same issue we are seeing on our Splunk Cloud instances, in both dev and production. We have the Splunk add-on for Salesforce, but the data ingestion is always delayed by 2 days. Does anyone know why this happens and how to fix it? Thank you in advance.
Hi all! I have been trying to compare a search with a CSV lookup table. So far, no luck... The list contains only 1 column, with usernames. For example:

username
user_Apha
user_Beta
user_Charlie
user_Delta

This list is used to verify whether users who are no longer in the company are still logging in (the list is updated daily), but I can't seem to make it work. This is the search I have so far:

index="wineventlog" source="WinEventLog:Security" action=success EventCode=4624 OR 4768
| lookup disabled_account_list username OUTPUT username AS Disabled_User
| where user = username
| table Time username

I assume that it is completely wrong, but I am out of ideas about how to correct it. Thank you very much, Sasquatchatmars
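A sketch of how this comparison is often written (the lookup and field names are taken from the question, but whether the Windows events carry the account in a field named `user` is an assumption):

```spl
index="wineventlog" source="WinEventLog:Security" action=success (EventCode=4624 OR EventCode=4768)
| lookup disabled_account_list username AS user OUTPUT username AS disabled_match
| where isnotnull(disabled_match)
| table _time user
```

The idea is to use the lookup itself as the filter: disabled_match is only populated when the event's user appears in the CSV, so isnotnull() keeps exactly the logins by listed accounts.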
Hi, how can I check technology inventory (new assets added, new software added, total storage attached, memory of each host) for the last 30 days? Also, how can I get Splunk volume usage and the top 10 apps for every month/quarter?
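For the volume-usage part, one common sketch against the license usage log (this assumes access to `_internal` on the license master and sufficient retention for the period in question):

```spl
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) AS bytes BY idx
```

`b` is the per-event byte count and `idx` the target index in these log entries; swapping `BY idx` for other fields gives different breakdowns of daily ingest volume.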
I have the following query, used to build a chart. Sometimes the incoming events do not have the fields set. How can events with null fields be excluded in a subsearch?

index=prod
| search processRelevantFields.processName="SessionExecution"
| search prod.customerId=* prod.productId=*
| timechart dc(customer.ciamId) as "Active Users"

I have tried "search <fieldName>=*" as shown above, but it is not working. Please guide me on how this could be implemented.
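A sketch of the usual alternative when `field=*` does not behave as expected (field names are copied from the question; in where/eval, field names containing dots need single quotes):

```spl
index=prod "processRelevantFields.processName"="SessionExecution"
| where isnotnull('prod.customerId') AND isnotnull('prod.productId')
| timechart dc(customer.ciamId) AS "Active Users"
```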
Hello, I am using Splunk Enterprise 7.3.2, and I have structured event data within an events index that I am trying to convert into metrics data so that I can store it in a metrics index. I am basing my analysis on the following topic: Get metrics in from other sources. I've managed to create a search that converts my event data into the format required by the metrics_csv sourcetype, after which I run the collect command to push the data:

| collect index="metrics_index" sourcetype="metrics_csv"

One thing to note is that when I rename my metric value field to _value, the field disappears from the statistics table. Once the search has completed, I am unable to access the data using the mstats and mcatalog commands on the metrics index. Is what I am trying to do possible? To test whether the format was correct, I exported the search results and indexed them by hand. This worked. Thank you and best regards, Andrew
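Not from the post, but possibly relevant as a sketch: SPL also has an mcollect command that writes search results into a metrics index directly, sidestepping the collect/metrics_csv formatting. The metric name, value, and index below are hypothetical; mcollect expects a numeric `_value` plus a `metric_name` on each result:

```spl
| makeresults
| eval metric_name="app.response_time", _value=42
| mcollect index=metrics_index
```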
Hi all, below is my log data:

2020-09-30T05:15:41.732035345Z app_name=api environment=2 ns=ab-c2 integrationType=PULL_GR_FILE_UPLOAD, integrationType=LR_JSON, callbackConfig, integrationType=PUSH_S3_GRS

I made the search query like this:

<query>index=abc ns=ab app_name=ui|stats count by integrationType</query>

The issue I am facing is that I am only getting the first integrationType, "PULL_GR_FILE_UPLOAD", and its count; it is not picking up the other integrationType values, even though the log contains 2-3 integration types for a particular date. Can someone guide me on where I am going wrong? Attached is the screenshot.
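A sketch of pulling every occurrence with rex, since automatic key=value extraction typically keeps only the first instance of a repeated key (the base search is copied from the question; the character class in the regex is an assumption about the value format):

```spl
index=abc ns=ab app_name=ui
| rex max_match=0 "integrationType=(?<integrationType>[A-Za-z0-9_]+)"
| mvexpand integrationType
| stats count by integrationType
```

max_match=0 makes rex collect all matches into a multivalue field, and mvexpand turns each value into its own row so stats counts them separately.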
I have a csv with data as below:

Timestamp       | Total Capacity | Used Capacity | Available Capacity | Percentage Used
9/30/2020 11:11 | 209.34 TB      | 201.46 TB     | 7.88 TB            | 96.24%

My inputs.conf is:

[monitor://F:\Storage\Tools\CTW\MSA\MSA_FSutilization.csv]
sourcetype = csv_use_current_date
disabled = false
index = storage
crcSalt = <SOURCE>

props.conf:

[csv_use_current_date]
DATETIME_CONFIG = CURRENT
HEADER_FIELD_LINE_NUMBER = 1
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = 1

The fields are not extracted properly; I am getting the full data in a single field. Any suggestions?
Hi, I have a list of all notable events which triggered in the past X days, using this SPL:

index=notable search_name="*Rule" orig_action_name=notable
| stats count by search_name

Using this query I can see the list of all my rules which are enabled to trigger notables:

| rest /services/saved/searches
| search title="*Rule" action.notable=1
| table title

Obviously, the second search returns a much larger list. I'd like to correlate those two searches to find out which of the rules did not dispatch a notable in the past X days. Any ideas on how to achieve this?
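A sketch combining the two searches from the question with `set diff`: assuming every notable's search_name also appears in the rules list, the symmetric difference is exactly the set of rules with no notables in the search window:

```spl
| set diff
    [| rest /services/saved/searches
     | search title="*Rule" action.notable=1
     | table title]
    [ search index=notable search_name="*Rule" orig_action_name=notable
     | dedup search_name
     | rename search_name AS title
     | table title]
```

Both subsearches must produce the same field (title here) for the set comparison to line up.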
Is there a way to get the difference between column A and column B and output it in column C?

Column A | Column B | Column C
apple    | pear     | orange
orange   | apple
pear
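A self-contained sketch of one approach: treat each column as a multivalue field and keep the values of A that never occur in B. The sample values come from the question; mvmap requires Splunk 8.0 or later, so this is an assumption about the environment:

```spl
| makeresults
| eval A=split("apple,orange,pear", ","), B=split("pear,apple", ",")
| eval C=mvmap(A, if(isnull(mvfind(B, "^".A."$")), A, null()))
| table A B C
```

mvfind returns the index of the first B value matching the anchored regex, or null when there is no match, so C collects exactly the A values absent from B.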
Hi, I'm trying to use https://skyvector.com in an iframe with this code. The $url$ token comes from another panel; it contains the skyvector url.

<row rejects="$show_main$">
  <panel>
    <html>
      <iframe src="$url$" width="1500" height="700"></iframe>
    </html>
  </panel>
</row>

The page shows in an iframe with this message: "If you can read this, enable javascript. It may also help to reload this page by pressing F5, Ctrl+R, Command+R or clearing your browser cache." When I right-click the frame in Firefox and choose "This Frame -> Show Only This Frame" or "Open Frame in New Tab", the page is shown correctly, as it is in this iframe tester: http://www.tinywebgallery.com/blog/advanced-iframe/free-iframe-checker. I think it's an issue with allowing the use of javascript in embedded pages; I tried web.conf but can't find a setting for this. Any ideas? Bart.