All Posts

Hi @Erendouille , the only way is to tune the Correlation Search, filtering out events with "unknown" or "NULL". One hint: don't modify Correlation Searches; clone and modify them in a custom app (called e.g. "SA-SOC"). Ciao. Giuseppe
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards over the last 24 hours, using a Splunk API call. I'm using a POST request to the ".../services/pdfgen/render" endpoint. First, I couldn't find any documentation on this matter. Furthermore, even when looking at $SPLUNK_HOME/lib/python3.7/site-packages/splunk/pdf/pdfgen_*.py (endpoint, views, search, utils) I couldn't really understand what arguments to use to ask for the last 24 hours of data. I know it should be possible because it is doable in the Splunk GUI, where you can choose a time range and render according to it. I saw something that looks like time range args, et and lt, which should be earliest time and latest time, but I don't know what type of time value they expect, and trying random things didn't get me anywhere. If you know anything on this subject, please help me. Thank you.
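For what it's worth, here is a minimal sketch of what such a call might look like, assuming the pdfgen/render endpoint accepts a dashboard name, an app namespace and the et/lt time-range arguments mentioned above. The parameter names, the relative time notation and the host/credentials/dashboard values are all assumptions drawn from this thread and the pdfgen_*.py modules rather than documented API, so verify them against your own instance.

import requests

SPLUNK_HOST = "https://your-splunk-server:8089"   # placeholder: management port
AUTH = ("admin", "changeme")                       # placeholder credentials

# Parameter names below are assumptions taken from this thread and the
# pdfgen_*.py sources, not from official documentation.
params = {
    "input-dashboard": "my_dashboard",   # placeholder dashboard (view) ID
    "namespace": "search",               # app the dashboard lives in
    "et": "-24h",                        # assumed: earliest time (relative or epoch)
    "lt": "now",                         # assumed: latest time
}

response = requests.post(SPLUNK_HOST + "/services/pdfgen/render",
                         params=params, auth=AUTH, verify=False)
response.raise_for_status()

with open("dashboard_last_24h.pdf", "wb") as f:
    f.write(response.content)

If the relative notation is rejected, epoch seconds (e.g. from time.time()) would be the other obvious format to try for et and lt.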
It's usually nice to actually ask a question after reporting the current state. Typically, if the search is properly defined and scheduled but is not being run, the issue is with resources. Are you sure your SH(C) is not overloaded and that you have no delayed/skipped searches? Did you check the scheduler's logs?
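As a starting point for that last question, here is a rough sketch (host, credentials and the search name are placeholders) that pulls the scheduler log for one saved search over the REST API and counts its run statuses; scheduler events in _internal normally carry a status such as success, skipped or deferred, and skipped runs usually include a reason.

import requests

SPLUNK_HOST = "https://your-splunk-server:8089"   # placeholder: management port
AUTH = ("admin", "changeme")                       # placeholder credentials

# SPL: count scheduler outcomes for one saved search over the last 24 hours.
# Replace the savedsearch_name value with the real name of your search.
spl = (
    'search index=_internal sourcetype=scheduler '
    'savedsearch_name="My Scheduled Search" earliest=-24h '
    '| stats count by status'
)

response = requests.post(
    SPLUNK_HOST + "/services/search/jobs",
    data={"search": spl, "exec_mode": "oneshot", "output_mode": "json"},
    auth=AUTH,
    verify=False,
)
response.raise_for_status()
for row in response.json().get("results", []):
    print(row)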
Hi, I want to know if it is possible to have a line chart with the area between the Max and Min values filled with color. Example: for the chart below, we will be adding 2 new lines (Max and Min), and we would like to have color filled in the area between the Max and Min lines. Current query to generate the 3 lines: | table Start_Time CurrentWeek "CurrentWeek-1" "CurrentWeek-2". Two more lines (Max and Min) need to be added to the above line chart, with the color filled between Max and Min.
Up
Well, this is a fairly generic question, and to answer it you have to look into your own data. The Endpoint datamodel definition is fairly well known and you can browse through its details any time in the GUI. You know which indexes the datamodel pulls the events from. So you must check the data quality in your indexes: check whether the sourcetypes have proper extractions and whether your sources provide you with relevant data. If there is no data in your events, what is Splunk supposed to do? Guess? It's not about repairing a datamodel, because the datamodel is just an abstract definition. It's about repairing your data or its parsing rules so that the necessary fields are extracted from your events. That's what CIM compliance means. If you have a TA for a specific technology which tells you it's CIM-compliant, you can expect the fields to be filled properly (and you could file a bug report if they aren't ;-)). But sometimes TAs require you to configure your source in a specific way, because otherwise not all relevant data is sent in the events. So it all boils down to having data and knowing your data.
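As one possible way to quantify that, the sketch below (host and credentials are placeholders) uses tstats against the Endpoint.Processes dataset to show, per sourcetype, how often process_path comes back as a literal "unknown"/"null" rather than a real value; the same pattern works for any other field a correlation search relies on.

import requests

SPLUNK_HOST = "https://your-splunk-server:8089"   # placeholder: management port
AUTH = ("admin", "changeme")                       # placeholder credentials

# SPL: per sourcetype, split Endpoint.Processes events into "missing" vs
# "populated" based on the literal process_path value.
# Note: events where process_path is entirely absent are dropped by the split-by.
spl = """| tstats count from datamodel=Endpoint.Processes
by sourcetype, Processes.process_path
| eval quality=if('Processes.process_path' IN ("unknown", "null"), "missing", "populated")
| stats sum(count) as events by sourcetype, quality"""

response = requests.post(
    SPLUNK_HOST + "/services/search/jobs/export",
    data={"search": spl, "output_mode": "json"},
    auth=AUTH,
    verify=False,
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode())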
OK. I assume you're talking about a DB Connect app installed on an HF in your on-prem environment, right? If you're getting other logs from that HF (_internal, some other inputs), that means the HF is sending data. It's the DB Connect input that's not pulling the data properly from the source database (DB Connect doesn't "send" anything on its own; it just gets the data from the source and lets Splunk handle it like any other input). So check your _internal for anything related to that input.
1. Most people don't speak Japanese here.
2. 7.3 is a relatively old version. Are you sure you meant that one? Not 9.3?
3. Regardless, if you can connect to localhost on port 8000, it seems that your Splunk instance is running. If you cannot connect remotely, it means that either splunkd.exe is listening on the loopback interface only (which you can verify with netstat -an -p tcp), or you are unable to reach the server at the network level (which, depending on your network setup, means either connections being filtered by the Windows firewall or problems with routing or filtering on your router).
Yeah, I know the problem was quite specific; sorry for the late answer, and thanks for your help. I was able to determine what failed: the GET was actually supposed to be a POST request. I don't really know why, but one Splunk error message said that GET is outdated for pdfgen. Anyway, thanks again.
Dear all, I'm trying to search for denied actions in a subnet, regardless of whether it is the source or the destination. I tried these without success; maybe you can help me out. Thank you!
index=* AND src="192.168.1.0/24" OR dst="192.168.1.0/24" AND action=deny
index=* action=deny AND src_ip=192.168.1.0/24 OR dst_ip=192.168.1.0/24
Just found it:
index=* dstip="192.168.1.0/24" OR srcip="192.168.1.0/24" action=deny
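A likely reason the first two searches returned nothing is operator precedence: in SPL, AND (including the implicit AND between terms) binds more tightly than OR, so the OR needs explicit parentheses; even the last version may match more than intended without them. Below is a rough sketch of the grouped search run over the REST API (host and credentials are placeholders, and srcip/dstip should be whatever field names your firewall sourcetype actually extracts).

import requests

SPLUNK_HOST = "https://your-splunk-server:8089"   # placeholder: management port
AUTH = ("admin", "changeme")                       # placeholder credentials

# Parentheses force the OR to be evaluated before the implicit ANDs.
spl = 'search index=* action=deny (srcip="192.168.1.0/24" OR dstip="192.168.1.0/24")'

response = requests.post(
    SPLUNK_HOST + "/services/search/jobs/export",
    data={"search": spl, "output_mode": "json", "earliest_time": "-24h"},
    auth=AUTH,
    verify=False,
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode())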
Thanks @gcusello for getting back to me! Yes, I configured DB Connect fully; everything works, but the actual data is not being sent. I tried both batch and rising input types with no luck getting data sent. Yes, I ingested a sample log file and it showed up successfully in Splunk Cloud. Yes, I used the same index I ingested the sample file to. Please let me know if there are other things I can check to resolve this issue. Is there any known issue with this Splunk DB Connect version?
Thanks for your answer @gcusello ! Yes, I'm aware that some of our searches appear multiple times because of the "trigger configuration", but this wasn't really the question; sorry if I misled you. My question was really about why the data coming from the Endpoint data model is not all filled in (for example, 99% of the parent_process_name values are "unknown" and 97% of the process_path values are "null"), and how I can "repair" the data model so every field has a value, which would mean no more false positives and a less crowded ESS dashboard. But thanks anyway for your quick reply!
Hi @Erendouille , in my experience, every Correlation Search requires a tuning phase to tune the thresholds. In addition, one solution could be not creating a Notable for each occurrence of a Correlation Search but using the Risk Score action instead; this way you find an issue later, but you have far fewer Notables that SOC Analysts must analyze. Ciao. Giuseppe
Hello. I installed the trial version of Splunk Enterprise 7.3 in my home environment.
Environment: Windows Server 2019 Essentials (192.168.0.x), Active Directory.
I installed it on a single machine only; no Forwarders are in use.
When accessing Splunk Web, the following addresses connect correctly:
https://localhost:8000
https://127.0.0.1:8000
However, the server's own IP address or hostname does not work and the connection times out:
https://192.168.0.x:8000
https://foo:8000
Accessing those addresses from other clients does not show the Splunk screen either.
Why is this? I suspect some setting is missing. Any advice would be appreciated.
Hello! I'm using Splunk_SA_CIM with ESS and I'm currently studying most of the ESCU correlation searches for my own purposes. Problem: I discovered that most of my ESCU rules are creating a lot of notable events which, after investigation, were all false positives. All these rules are based on fields coming from the Endpoint data model (for example, Processes.process_path), and because most of the process_path values are equal to "null", the search triggers and creates a notable event. I've already updated every app I use, and to gather Windows data I'm using the Splunk_TA_Windows add-on. Do you have any clue how I can find where the problem is and solve it?
Hi @Strangertinz , a stupid question: have you configured the input in DB Connect, or did you only test the connection?
Did you check the checkpoint based on the rising column? In other words, are you sure that you have rows where the rising column value is greater than the previous one?
Are you sure that you're receiving logs in Splunk Cloud from the same server where DB Connect is located?
Did you check the index used in inputs.conf?
Ciao. Giuseppe
Hi @Siddharthnegi , in the test, be sure that the time period is the same as the scheduled one. Then, do you know which app it's located in, so you can search for it? If you don't know the app, you could search for it over SSH in the savedsearches.conf files. Ciao. Giuseppe
Hi @Esky73 , some add-ons aren't free and require you to sign in to an external site. The strange thing is that if you follow the Details, there's another link to download this add-on; the title is a little different, but it's downloadable. Ciao. Giuseppe
It's a report, it is shared at the global level, and when I ran this search it gave results.
Hi @Siddharthnegi , is this savedsearch an alert or a report?
Is this savedsearch shared at least at app level, or is it private?
Are you sure that the savedsearch has results?
Please make a test: modify the savedsearch to ensure that there will be at least one result, and see what happens.
Ciao. Giuseppe