All Topics

Hello, I have a question about the subject above. I want to distribute an App from a deployment server (Splunk Enterprise 7.3.3, Windows 64-bit) to a deployment client (Universal Forwarder 7.3.3, Windows 64-bit). However, under Settings > Forwarder Management in the Splunk management console, the PC on which I installed the Universal Forwarder does not appear as a client, so I cannot proceed. I am a Splunk beginner and do not know what to check or what to configure. The steps I took are as follows:
1. Installed Splunk Enterprise 7.3.3 (Windows 64-bit) on PC1.
2. Settings > Forwarding and receiving > Configure receiving > created listening port 9997.
3. Installed Universal Forwarder 7.3.3 (Windows 64-bit) on PC2, specifying PC1's IP address:8089 as the Deployment Server and PC1's IP address:9997 as the Receiving Indexer.
4. Added compressed = true to [tcpout:default-autolb-group] in PC2's outputs.conf.
What I have confirmed:
1. From PowerShell on PC2, Test-NetConnection confirms that PC1 is reachable on port 9997.
2. When PC2 is configured to send Windows logs to PC1, I can see the data in the Search & Reporting data summary. (Even in this state, PC2 does not appear as a client under Settings > Forwarder Management.)
Thank you in advance.
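For reference, a forwarder phones home to the deployment server only if its deploymentclient.conf points at the server's management port; a minimal sketch, assuming a default installation (the IP address is a placeholder for PC1):

```
# %SPLUNK_HOME%\etc\system\local\deploymentclient.conf on PC2
[deployment-client]

[target-broker:deploymentServer]
# management port (8089) of the deployment server, PC1
targetUri = 192.0.2.10:8089
```

After editing the file, the Universal Forwarder service must be restarted for the change to take effect.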
Hi, I am testing coalesce with the simple search below:

index=fios 110788439127166000 | eval check=coalesce(SVC_ID,DELPHI_REQUEST.REQUEST.COMMAND) | table DELPHI_REQUEST.REQUEST.COMMAND, host, SVC_ID, check | rename DELPHI_REQUEST.REQUEST.COMMAND as "COMMAND"

I get the output below, where coalesce prints a null value instead of the value of the field DELPHI_REQUEST.REQUEST.COMMAND:

COMMAND        host      SVC_ID   check
GET_TOPOLOGY   dlfdam1
GET_TOPOLOGY   dlfdam1

However, with the query below, coalesce works fine:

index=fios 110788439127166000 | eval check=coalesce(SVC_ID,host) | table DELPHI_REQUEST.REQUEST.COMMAND, host, SVC_ID, check | rename DELPHI_REQUEST.REQUEST.COMMAND as "COMMAND"

COMMAND        host      SVC_ID   check
GET_TOPOLOGY   dlfdam1            dlfdam1
GET_TOPOLOGY   dlfdam1            dlfdam1

Can someone help me understand why coalesce works with the host field but not with the extracted field?
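A likely cause, hedged: inside eval expressions, a field name containing dots must be wrapped in single quotes, otherwise Splunk does not resolve it as a single field. A sketch of the quoted form:

```
index=fios 110788439127166000
| eval check=coalesce(SVC_ID, 'DELPHI_REQUEST.REQUEST.COMMAND')
| table DELPHI_REQUEST.REQUEST.COMMAND, host, SVC_ID, check
| rename DELPHI_REQUEST.REQUEST.COMMAND as "COMMAND"
```

Outside eval (in table, rename, etc.) the unquoted dotted name is fine, which would explain why host works while the dotted field does not.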
Hi, I have an issue with the field MemoryUsage. I get no results from

| eval MemoryUsage = round((TotalMemory-FreeMemory) / TotalMemory*100, 2)

because the field FreeMemory does not return any results. It is strange, because the Value field is a positive integer and I collect that field correctly. So what is the issue, please?

| fields host Value TotalPhysicalMemory
| eval FreeMemory = round(Value, 0)
| eval TotalMemory = round((TotalPhysicalMemory / 1024 / 1024), 0)
| eval MemoryUsage = round((TotalMemory-FreeMemory) / TotalMemory*100, 2)
| stats last(FreeMemory) as "Free Memory", last(TotalMemory) as "Total Memory", values(MemoryUsage) as "Memory Usage" by host
| eval Free Memory='Free Memory'." MB", Total Memory='Total Memory'." MB", Memory Usage='Memory Usage'." %"
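One hedged observation on the last line of the search above: a destination field name containing a space must be double-quoted on the left-hand side of an eval assignment (single quotes are only for reading existing fields). A sketch of the quoted form:

```
| eval "Free Memory"='Free Memory'." MB",
       "Total Memory"='Total Memory'." MB",
       "Memory Usage"='Memory Usage'." %"
```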
I am trying to evaluate AppDynamics against ELK and am seeking help in getting the correct details. Please can someone help? We run docker-compose, which creates 10 containers across 4 workers managed by 2 managers. Each container runs one JVM.
1. How can I use AppDynamics with Docker Swarm? There is a broken link in the documentation, and the only available links are for K8s and OpenShift. I know Docker is moving away from Swarm, but I need it for an existing environment.
2. How can I configure my Docker containers to use the node name (where docker-compose creates the containers) as the container hostname, so that we can use a single Machine Agent to monitor all containers running on that machine/node?
3. Can we use the Machine Agent alone to get details of the containers running on that machine (container statistics and logs), or do we need to install app agents inside the containers? If so, could you please suggest how to proceed?
Thanks in advance.
How do we change the color of a bar chart based on the x-axis value? We have already tried all the options we could find on Google, but still no result. Can you please help us?
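In Simple XML, charting.fieldColors assigns colors per series name, not per x-axis value, so one hedged workaround is to turn each category into its own series and then color those series. A sketch in which the field name, category values, and colors are all placeholders:

```
... | stats count BY category
| eval {category}=count
| fields - category count
```

with a matching option in the dashboard XML:

```
<option name="charting.fieldColors">{"low":0x65A637,"medium":0xF8BE34,"high":0xD93F3C}</option>
```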
Hi all, I am using a heavy forwarder (Splunk version 8.0.1, Windows OS) to ingest .zip log files, but I can see that very few cores are being used, around 2 to 3, although the heavy forwarder has 8 cores available. I have already increased parallelIngestionPipelines to 2. Is there a setting by which I can make Splunk use more of the available cores? Thanks,
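For reference, the pipeline setting lives in server.conf; a sketch, where the value 4 is purely illustrative and each additional pipeline costs extra CPU and memory:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 4
```

A restart is required, and note that a single compressed archive is processed by one pipeline, so many small files parallelize better than one large .zip.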
I am trying to query an Oracle DB using the statement attached to this case. The query works fine as a batch input, but when I configure a rising column and checkpoint value, it throws the error attached to this case. Please advise on how to proceed.
Hello, is there any documentation about the multireport command? I couldn't find any. Thanks.
Hi, I am able to post compressed data to the Splunk HTTP Event Collector using gzip and curl:

curl -v -k -H "Content-Encoding: gzip" -H "Authorization: Splunk token" --data-binary @data.json.gz url

Does the Java logging library described in https://dev.splunk.com/enterprise/docs/java/logging-java/ support gzip compression? Publishing raw data would consume a significant amount of network bandwidth. Thanks!
Hi all, I have not been able to find any solution for converting a Splunk SPL query to a Sigma file, and I want to write a script to do the conversion. Objective: the SPL query is entered as input, and a Python script converts it to a Sigma file. Please let me know if there are any existing packages or other solutions.
Hi, how can I extract 2 specific values of fieldA from a lookup, ignore the rest, and then count them as a total?
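One hedged sketch, assuming the lookup file name and the two values of interest (all names below are placeholders):

```
| inputlookup my_lookup.csv
| where fieldA IN ("valueX", "valueY")
| stats count AS total
```

The where clause drops every row whose fieldA is not one of the two values before counting.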
Hi, I'm trying to create an alert action that creates an incident whenever any alert is triggered. What's the best way to achieve this? Please suggest.
Hello everyone, I would like some help. My organization has an LDAP directory containing user data, authorizations, dates of change, etc. I have exported a static list containing the data, and I export an updated list every once in a while. I can't index the data again, since that would cause duplicates. I made a lookup table and a lookup definition, but when I exported a new list and swapped the file in Splunk's folders, my searches stopped working. There was a warning saying it could not find the file and the SID (even though I gave it the same name as the old file). It worked again only after I replaced the file through the lookup table. What can I do so that when I update the list in the Splunk folders I won't need to change the file in the lookup table? Is there a way to make it automatic? I thought of making an automatic lookup, but all the data I need is inside the lookup table, so I have nothing to put in the lookup input fields. Thank you, Sabina.
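If the updated export can be brought into Splunk (for example via a temporary one-shot upload), one hedged approach is to rewrite the lookup in place with outputlookup, which keeps the existing lookup-table and definition registration instead of replacing the file on disk; the index, sourcetype, and field names below are placeholders:

```
index=ldap_export sourcetype=ldap_users
| table user authorization change_date
| outputlookup ldap_users.csv
```

Scheduled as a saved search, this keeps the CSV current without manual file swaps.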
After restarting the Splunk services on the server, a few days later the server still does not phone home to Splunk Enterprise. How can I get splunkd.log on Windows Server 2008 through 2016?
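For reference, splunkd.log is written under the Splunk installation directory; assuming default Windows install paths, the usual locations are:

```
C:\Program Files\Splunk\var\log\splunk\splunkd.log
C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log
```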
Hi, I have written a query to return a list of details as below; however, the results for all 30 days are not populating. Instead it gives only the results for the last 3 days.

"http://pinky/createcustomer" NOT "http:/pinky/confirmcustomer"
| join type=left vsid [ search "http:/pinky/searchcustomer" ]
| eval time=strftime(_time,"%a %B %d %Y %H:%M:%S.%N")
| stats count(vsid) as TempcustomerCount list(email) as Email list(firstname) as FirstName list(lastname) as LastName list(JSESSIONID) as JSessionID list(time) as Time by customerCode,previewCode,vsid
| where TempcustomerCount>=5
Looking to collect activities performed by users on Unix servers. Currently I am able to identify login activity, and I am tracking activities based on the apps below:
systemd-logind, chage, serevu, sesu, sftp-server, su, sudo
Likewise, I would like to know other possible ways to track activity and the commands executed on Unix servers. I would appreciate any help with achieving this.
Hi, I have requested a trial license. Will I be able to create "alerting extensions" in a SaaS AppDynamics instance (trial license)? Regards, subramanyam
I'm trying to create a timechart showing the count of events over 6 months. The query is:

index=itemdb `macrotest` (name != "*itemA" AND name != "*itemB")
| eval category = case(...)
| eval fields = split(name,"_")
| eval mname = mvindex(fields,1)
| search category = "promo"
| dedup f_1 f_2
| timechart count by id span=1mon

The goal is to dedup within each month only, not across all 6 months. For example, if the same values of f_1,f_2 appear in all 6 months, I should get 1 count of f_1,f_2 in each of the 6 months, not only in one month. However, it seems the f_1,f_2 values are deduped across all 6 months and so appear in only one month. Can I bin events by the month they appear in and then dedup within that month only to achieve this? Or is there another way?
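One hedged approach: bin _time to the month before dedup, so the month boundary becomes part of the dedup key; events with the same f_1,f_2 in different months then survive independently. A sketch, reusing the elided case(...) from the original query as-is:

```
index=itemdb `macrotest` (name != "*itemA" AND name != "*itemB")
| eval category = case(...)
| search category = "promo"
| bin _time span=1mon
| dedup _time f_1 f_2
| timechart count by id span=1mon
```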
Hello, I want to trigger a Python script as a reaction to an alert. I have added the stanza to alert_actions.conf and restarted Splunk:

[myscript]
is_custom = 1
disabled = 0
label = myscript
description = myscript
track_alert = 1
ttl = 600
maxtime = 5m
icon_path = alert_manager_icon.png
payload_format = xml
filename = myscript.py
alert.execute.cmd = /opt/splunk/etc/apps/bla/bin/myscript.py

In the splunkd log I find the following entry:

02-19-2020 13:25:39.278 +0100 ERROR ModularUtility - Specified filename "..." not found in search path. ...

But the script is definitely there. Kindly help me find out what I'm missing. Best regards, Falko
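A hedged observation on the stanza above: alert.execute.cmd expects a bare filename that Splunk resolves against the app's bin directory, not an absolute path. A sketch of the adjusted stanza lines:

```
# $SPLUNK_HOME/etc/apps/bla/local/alert_actions.conf
[myscript]
filename = myscript.py
# filename only; the script must live in $SPLUNK_HOME/etc/apps/bla/bin/
alert.execute.cmd = myscript.py
```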
Hi, with the XML below I display a complex bar chart, which you can see in the screenshot. I would like to change 3 things:
1 - I need to delete "Number" under the X axis.
2 - Under each bar of the chart I would like to have the scale that is currently shown in the legend.
3 - I need to compute an average on the process_cpu_used_percent field, something like:
| eval cpu_range=case(avg(process_cpu_used_percent>0 AND process_cpu_used_percent <=20,"0-20",
Could you help me please?

<row>
  <panel>
    <title>CPU overall usage</title>
    <chart>
      <search>
        <query> `CPU` | fields process_cpu_used_percent host | eval host=upper(host) | eval cpu_range=case(process_cpu_used_percent>0 AND process_cpu_used_percent <=20,"0-20", process_cpu_used_percent>20 AND process_cpu_used_percent <=40,"20-40", process_cpu_used_percent>40 AND process_cpu_used_percent <=60,"40-60", process_cpu_used_percent>60 AND process_cpu_used_percent <=80,"60-80", process_cpu_used_percent>80 AND process_cpu_used_percent <=100,"80-100") | chart dc(host) as "Number" by cpu_range | append [| makeresults | fields - _time | eval cpu_range="0-20,20-40,40-60,60-80,80-100" | makemv cpu_range delim="," | mvexpand cpu_range | eval "Number"=0] | dedup cpu_range | sort cpu_range | transpose header_field=cpu_range | search column!="_*" | rename column as cpu_range</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.axisTitleX.text">CPU Usage (%)</option>
      <option name="charting.axisTitleY.text">Number of hosts</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.maximumNumber">1000</option>
      <option name="charting.axisY.minimumNumber">0</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.stackMode">default</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.fieldColors">{"0-20":0x49B849,"20-40":0x006EAA,"40-60":0xE0AC16,"60-80":0xDA742E,"80-100":0xC84535}</option>
      <option name="charting.legend.placement">right</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>
<row>
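For point 3, a hedged sketch: avg() is a stats function, so it cannot be called inside eval/case directly; instead the average can be computed per host first and then bucketed:

```
`CPU`
| stats avg(process_cpu_used_percent) AS avg_cpu BY host
| eval cpu_range=case(avg_cpu>0 AND avg_cpu<=20,"0-20",
    avg_cpu>20 AND avg_cpu<=40,"20-40",
    avg_cpu>40 AND avg_cpu<=60,"40-60",
    avg_cpu>60 AND avg_cpu<=80,"60-80",
    avg_cpu>80 AND avg_cpu<=100,"80-100")
| chart dc(host) AS "Number" BY cpu_range
```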