All Posts

Hi @CMEOGNAD, first of all, I assume you know that you must have the can_delete role associated with your user. Then, I assume you know that this is a logical, not a physical, removal: deleted events are only marked as deleted and are not removed from the buckets until the end of the bucket life cycle. In other words, deleting them gives you no useful effect in terms of storage or license (they have already been indexed). Anyway, I'm not sure it's possible to apply the delete command to a streaming command: you should select the events to delete and run the delete command after the main search. Ciao. Giuseppe
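A minimal sketch of that select-then-delete pattern, assuming hypothetical index and sourcetype names and a simple threshold of 100 percent; run it without the last line first to verify exactly which events would be removed:

index=sensor_data sourcetype=humidity_json humidity>100
| delete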
Hi Community, I have a data source that sometimes submits faulty humidity data, like 3302.4 percent. To clean/delete these outlier events, I built a timechart avg to get the real humidity curve, and from this curve I get the max and min with stats, to use as the upper and lower bounds. ...but my search won't work, and I need your help. Here is a makeresults sample:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
    [ search
    | timechart eval(round(avg(humidity),1)) AS avg_humidity
    | stats min(avg_humidity) as min_avg_humidity ]
| where humidity < min_avg_humidity
```| delete ```
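A minimal sketch of one way the same idea might be expressed without the subsearch, reusing the sample above; the eventstats/median approach and the tolerance of 10 percentage points are assumptions, not part of the original requirement:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
| eventstats median(humidity) AS typical_humidity
| where abs(humidity - typical_humidity) <= 10

This keeps the readings close to the typical value and drops the outliers; the surviving events could then be handled in a separate search ending in delete, as described in the reply above.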
Hi @PickleRick, thank you for your suggestions. After following them, the configurations are now working correctly for my use case. Here are the changes I made to the [route_to_teamid_index] stanza in transforms.conf:
1) Set FORMAT = $1
2) Updated SOURCE_KEY = MetaData:Source

Current working configs for my use case:

-----------------------------------------------------------------------------
props
-----------------------------------------------------------------------------
#custom-props-for-starflow-logs
[source::.../starflow-app-logs...]
TRANSFORMS-set_new_sourcetype = new_sourcetype
TRANSFORMS-set_route_to_teamid_index = route_to_teamid_index

-----------------------------------------------------------------------------
transforms
-----------------------------------------------------------------------------
#custom-transforms-for-starflow-logs
[new_sourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = MetaData:Source
FORMAT = $1
DEST_KEY = _MetaData:Index
WRITE_META = true

Previously, the configuration had SOURCE_KEY = source, which was causing issues. The SOURCE_KEY = <field> setting essentially tells Splunk which KEY the regex should be applied to. In my configuration it was set to "source", so Splunk might not have been able to apply the regex to the source value at index time. After spending time reading through transforms.conf, I noticed that the global settings mention this specifically:

SOURCE_KEY = <string>
* NOTE: This setting is valid for both index-time and search-time field extractions.
* Optional. Defines the KEY that Splunk software applies the REGEX to.
* For search time extractions, you can use this setting to extract one or more values from the values of another field. You can use any field that is available at the time of the execution of this field extraction.
* For index-time extractions use the KEYs described at the bottom of this file.
* KEYs are case-sensitive, and should be used exactly as they appear in the KEYs list at the bottom of this file. (For example, you would say SOURCE_KEY = MetaData:Host, *not* SOURCE_KEY = metadata:host.)

Keys
MetaData:Source : The source associated with the event.

Thank you sincerely for all of your genuine help!
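As an illustration of how the regex resolves: assuming a hypothetical source path such as .../starflow-app-logs-prod/team42/app.log, the capture group matches team42 and the event is routed to an index named team42 (which must already exist on the indexers). A quick search along these lines can confirm where the events are landing (index and source patterns are assumptions based on the config above):

index=* source="*starflow-app-logs*"
| stats count BY index, source, sourcetype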
Hi @Real_captain, you could try using an area chart and possibly using white for the min area, so that only the band between min and max appears coloured. Ciao. Giuseppe
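A minimal sketch of the idea, assuming a hypothetical humidity field and a white dashboard background (field names, span and colours are assumptions):

index=sensor_data
| timechart span=1h max(humidity) AS Max min(humidity) AS Min

rendered as an area chart (charting.chart = area) with charting.fieldColors = {"Min": 0xFFFFFF}, so the Min area blends into the background and only the band up to Max remains visible. Whether the band renders cleanly depends on series order and stacking, so treat this as a starting point rather than a guaranteed recipe.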
Hi @tbayer82, the order of filters isn't relevant, but if you have OR operators I'd prefer to use parentheses:

index=* (dstip="192.168.1.0/24" OR srcip="192.168.1.0/24") action=deny

and you don't need to use the AND operator, since it's the default. Ciao. Giuseppe
Hi @Strangertinz, check again the results in the rising column field: usually the issue is there. You get results when executing the SQL query in DB Connect, but the input extracts only the records whose rising column values are greater than the checkpoint, so if the rising column isn't correct, or contains duplicated values, you risk losing records. Ciao. Giuseppe
Hi Team, I am fetching unique "ITEM" values with a first SQL query running on one database, then passing those values to a second SQL query to fetch the corresponding values from a second database.

First SQL query:

select distinct a.item
from price a, skus b, deps c, supp_country s
where zone_id in (5, 25)
and a.item = b.sku
and b.dept = c.dept
and a.item = s.item
and s.primary_supp_ind = 'Y'
and s.primary_pack_ind = 'Y'
and b.dept in (7106, 1666, 1650, 1651, 1654, 1058, 4158, 4159, 489, 491, 492, 493, 495, 496, 497, 498, 499, 501, 7003, 502, 503, 7004, 450, 451, 464, 465, 455, 457, 458, 459, 460, 461, 467, 494, 7013, 448, 462, 310, 339, 7012, 7096, 200, 303, 304, 1950, 1951, 1952, 1970, 1976, 1201, 1206, 1207, 1273, 1352, 1274, 1969, 1987, 342, 343, 7107, 7098, 7095, 7104, 2101, 2117, 7107, 7098, 1990, 477, 162, 604, 900, 901, 902, 903, 904, 905, 906, 908, 910, 912, 916, 918, 7032, 919, 7110, 7093, 7101, 913, 915, 118, 119, 2701, 917)
and b.js_status in ('CO');

Second SQL query:

WITH RankedData AS (
    SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated,
           ROW_NUMBER() OVER (PARTITION BY Product_Id, BusinessUnit_Id ORDER BY LastUpdated DESC) AS RowNum
    FROM RETAIL.DBO.CAT_PRICE(nolock)
    WHERE BusinessUnit_Id IN ('zone_5', 'zone_25')
      AND Product_Id IN ($ITEM$)
)
SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated
FROM RankedData
WHERE RowNum = 1;

When I use the map command as shown below, the expected results are fetched, but only 10K records because of the map command limitations. I want to fetch all the records (around 30K).

Splunk query:

| dbxquery query="First SQL query" connection="ABC"
| eval comma="'"
| eval ITEM='comma' + 'ITEM' + 'comma'+","
| mvcombine ITEM
| nomv ITEM
| fields - comma
| eval ITEM=rtrim(tostring(ITEM),",")
| map search="| dbxquery query=\"Second SQL query" connection=\"XYZ\""

But when I use the join command as shown below to get all the results (more than 10K), I don't get the desired output: the output only contains results from the first query. I tried replacing the column name Product_Id in the second SQL with ITEM in all places, but still no luck.

| dbxquery query="First SQL query" connection="ABC"
| fields ITEM
| join type=outer ITEM[search dbxquery query=\"Second SQL query" connection=\"XYZ\""

Could someone help me understand what is going wrong and how I can get all the matching results from the second query?
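The exact cause can't be confirmed from here, but a common culprit with join is that the key field must exist with the same name (and case) on both sides, and join subsearches have their own row and runtime limits (by default roughly 50,000 rows and an auto-finalize timeout). A rough sketch of what a field-aligned join might look like, with the second query left unfiltered; the connection names and query placeholders come from the post above, everything else is an assumption:

| dbxquery query="First SQL query" connection="ABC"
| fields ITEM
| rename ITEM AS PRODUCT_ID
| join type=left PRODUCT_ID
    [| dbxquery query="Second SQL query without the Product_Id IN ($ITEM$) filter" connection="XYZ"
     | fields PRODUCT_ID BUSINESSUNIT_ID PRICE LASTUPDATED]

Note that the database may return column names in a different case than written (Oracle, for example, upper-cases unquoted identifiers), so checking the actual field names each dbxquery returns, and renaming accordingly, is usually the first thing to verify.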
Hi @Erendouille, the only way is to tune the Correlation Search, filtering out events with "unknown" or "NULL". One hint: don't modify the Correlation Searches themselves; clone them and modify the clones in a custom app (called e.g. "SA-SOC"). Ciao. Giuseppe
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards over the last 24 hours, using a Splunk API call. I'm using a POST request to the ".../services/pdfgen/render" endpoint. First, I couldn't find any documentation on this matter. Furthermore, even when looking at $SPLUNK_HOME/lib/python3.7/site-packages/splunk/pdf/pdfgen_*.py (endpoint, views, search, utils), I couldn't really understand what arguments to use to ask for the last 24 hours of data. I know it should be possible, because it is doable in the Splunk GUI, where you can choose a time range and render according to it. I saw something that looks like time range args, et and lt, which should be earliest time and latest time, but I don't know what kind of time value they expect, and trying random things didn't get me anywhere. If you know anything about this subject, please help. Thank you.
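A rough sketch of the kind of request that might work, assuming et/lt accept the same epoch or relative time values used elsewhere in Splunk and that input-dashboard/namespace are the expected selectors; parameter names and accepted formats are assumptions, not confirmed documentation:

curl -k -u admin:yourpassword "https://localhost:8089/services/pdfgen/render" -d input-dashboard=my_dashboard -d namespace=search -d et=-24h@h -d lt=now --output my_dashboard.pdf

If relative time strings are rejected, epoch seconds for et and lt would be the next thing to try.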
It's usually nice to actually ask a question after reporting the current state. Typically, if the search is properly defined and scheduled but is not being run, the issue is with resources. Are you sure your SH(C) is not overloaded and that you have no delayed/skipped searches? Did you check the scheduler's logs?
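For example, a quick look at skipped or deferred scheduler runs (filter on your own saved search name as needed):

index=_internal sourcetype=scheduler status IN (skipped, deferred)
| stats count BY savedsearch_name, status, reason
| sort - count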
Hi, I want to know if it is possible to have a line chart with the area between the max and min values filled with colour. Example: for the chart below, we will be adding 2 more lines (Max and Min) and we would like the area between the Max and Min lines to be filled with colour. Current query to generate the 3 lines:

| table Start_Time CurrentWeek "CurrentWeek-1" "CurrentWeek-2"

2 more lines (Max and Min) need to be added to the above line chart, with the colour filled in between Max and Min.
Up
Well. This is a fairly generic question, and to answer it you have to look into your own data. The Endpoint datamodel definition is fairly well known and you can browse through its details at any time in the GUI. You know which indexes the datamodel pulls events from, so you must check the data quality in those indexes: check whether the sourcetypes have proper extractions and whether your sources provide the relevant data in the first place. If there is no data in your events, what is Splunk supposed to do? Guess?

It's not about repairing a datamodel, because the datamodel is just an abstract definition. It's about repairing your data or its parsing rules so that the necessary fields are extracted from your events. That's what CIM compliance means. If you have a TA for a specific technology which tells you it's CIM-compliant, you can expect the fields to be filled properly (and you could file a bug report if they aren't ;-)). But sometimes TAs require you to configure your source in a specific way, because otherwise not all relevant data is sent in the events. So it all boils down to having data and knowing your data.
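As a quick way to see where the gaps come from, something along these lines (assuming the standard Endpoint.Processes dataset; acceleration is not required as long as summariesonly is not set) shows which sourcetypes are producing the "unknown" values:

| tstats count FROM datamodel=Endpoint.Processes BY sourcetype, Processes.parent_process_name
| sort - count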
OK. I assume you're talking about a DB Connect app installed on an HF in your on-prem environment, right? If you're getting other logs from that HF (_internal, some other inputs), that means the HF is sending data; it's the DB Connect input that isn't pulling the data properly from the source database (DB Connect doesn't "send" anything on its own; it just gets the data from the source and lets Splunk handle it like any other input). So check your _internal for anything related to that input.
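For example, something along these lines; the source path is how DB Connect logs usually appear in _internal, while the exact sourcetype names vary between DB Connect versions, so treat the details as assumptions:

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count BY sourcetype, source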
1. Most people don't speak Japanese here.
2. 7.3 is a relatively old version. Are you sure you meant that one? Not 9.3?
3. Regardless, if you can connect to localhost on port 8000, it seems that your Splunk instance is running. If you cannot connect from a remote host, it means that either splunkd.exe is listening on the loopback interface only (which you can verify with netstat -an -p tcp), or you are unable to reach the server at the network level (which, depending on your network setup, means either connections being filtered by the Windows firewall, or problems with routing or filtering on your router).
Yeah, I know the problem was quite specific; sorry for the late answer and thanks for your help. I was able to determine what failed: the GET was actually supposed to be a POST request. I don't really know why, but one Splunk error message said that GET is outdated for pdfgen. Anyway, thanks again.
Dear all, I'm trying to search for denied actions in a subnet, regardless of whether it is the source or the destination. I tried these without success; maybe you can help me out. Thank you!

index=* AND src="192.168.1.0/24" OR dst="192.168.1.0/24" AND action=deny

index=* action=deny AND src_ip=192.168.1.0/24 OR dst_ip=192.168.1.0/24

Just found it:

index=* dstip="192.168.1.0/24" OR srcip="192.168.1.0/24" action=deny
Thanks @gcusello for getting back to me! Yes, I configured DB Connect fully; everything works except that the actual data is not being sent. I tried both batch and rising input types, with no luck getting data sent. Yes, I ingested a sample log file and it showed up successfully in Splunk Cloud. Yes, I used the same index I ingested the sample file into. Please let me know if there are other things I can check to resolve this issue. Is there any known issue with this Splunk DB Connect version?
Thanks for your answer @gcusello! Yes, I'm aware that some of our searches appear multiple times because of the "trigger configuration", but this wasn't really the question, sorry if I misled you. My question was really about why the data coming from the Endpoint data model is not all filled in (for example, 99% of the parent_process_name fields are "unknown" and 97% of the process_path fields are "null"), and how I can "repair" the data model so that every field has a value, which would mean no more false positives and a less crowded ESS dashboard. But thanks anyway for your reactivity!
Hi @Erendouille, in my experience, every Correlation Search requires a tuning phase to adjust the thresholds. In addition, one solution could be not to create a Notable for each occurrence of a Correlation Search, but to use the Risk Score action instead: this way you find an issue a bit later, but you have far fewer Notables that the SOC Analysts must analyze. Ciao. Giuseppe