All Topics

Hi, We want to read the database logs from a Linux server. The logs are stored in a specific path, “</path>/log/” as an example. The logs are archived at the end of every day in the same directory as the real-time log file. The real-time info is written to the “vertica.log” file, so we don’t want to read logs from the file “vertica.log”. How can we read these archived files in Splunk?
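A minimal inputs.conf sketch for this (the monitor path keeps the placeholder from the question, and the sourcetype name is hypothetical; the blacklist assumes the live file is literally named vertica.log):

```
[monitor://</path>/log/]
blacklist = vertica\.log$
sourcetype = vertica_archive
disabled = 0
```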
Splunk 8.0.4, indexing syslog events from the Symantec Blue Coat ProxySG. Using the app https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/About to set the correct sourcetype. Example event:

2020-06-11 12:01:42 34 172.21.207.129 dkkguest - 123.160.120.251 123.160.120.251 Unavailable - - OBSERVED "News" - 200 TCP_HIT GET text/xml;%20charset=utf-8 http weather.service.msn.com 80 /data.aspx ?wealocations=wc%3aNOXX0001&culture=nb-NO&weadegreetype=C&src=outlook aspx "Mozilla/4.0 (compatible; ms-office; MSOffice 16)" 172.20.170.129 1099 294 - "none" "none" "none" unavailable 4f09400d3d550882-0000000102455180-000000005ee21d26 - -

The event time, 12:01, is indexed by Splunk as 12:01 as well, but it should be two hours later, 14:01. I would expect Splunk to handle this out of the box, but it doesn't. Setting this on the sourcetype, for instance, would not be a good idea: what if another appliance comes along with different time settings? So how do I best approach this?
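One way to scope a timezone override to just this appliance (a sketch; the host pattern is hypothetical, and this assumes the appliance emits UTC timestamps that should land two hours later in local time - substitute whatever zone it actually uses). A host-based stanza in props.conf on the parsing tier avoids touching the shared sourcetype:

```
# props.conf on the heavy forwarder / indexer
[host::proxysg-*]
TZ = UTC
```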
When I output a CSV like the Windows Event Log, using alert action > email notification action > attach CSV, for an event with line breaks, the line breaks and tabs are converted to the strings "\n" and "\t". Is there a way to output the line breaks and tabs to the CSV without converting them into strings?
Hi, I'm trying to find duplicate values of a field using the query below:

index=internal sourcetype="*" Space="*" App="*" | eval App=lower(APP) | dedup Space,APP | stats count by APP | where count>1

I get a result like:

APP     count
app1    2
app2    2

Now I want to display both original values, like:

APP     count
app1    1
APP1    1
APP2    1
app2    1

I'm not able to find a way to get results like the above. Can someone help with this?
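One way to surface every original-case variant of the duplicated names (a sketch; index and field names follow the question, and App_lower is a helper field introduced here):

```
index=internal sourcetype="*" Space="*" App="*"
| eval App_lower=lower(App)
| eventstats dc(App) as case_variants by App_lower
| where case_variants > 1
| stats count by App
```

eventstats keeps each original event while attaching the distinct-count per lowercased name, so rows like app1 and APP1 both survive into the final stats.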
Hi, below is my result after doing:

xyseries Date_Time, APPROVAL_STATUS, ACT_UW_COUNT

Date_Time  APPROVED  BACK TO SALES  DECLINED  OTHERS
12:46:36   260-199   1-2            18-19     94-0
13:01:35   260-199   1-2            19-20     94-0
13:16:35   260-199   1-2            19-20     94-0
13:31:36   260-199   1-2            19-20     94-0
13:46:36   260-199   1-2            19-20     94-0
14:01:36   260-199   1-2            19-20     94-0
14:16:36   260-199   1-2            19-20     94-0
14:31:36   260-199   1-2            19-20     94-0
14:46:36   260-199   1-2            19-20     94-0
15:01:35   261-199   3-7            19-20     95-0
15:16:36   261-199   3-7            19-20     95-0
15:31:36   261-199   3-7            19-20     95-0
15:46:35   261-199   3-7            19-20     95-0
16:01:36   261-199   3-7            19-20     95-0
16:16:36   261-199   3-7            19-20     95-0
16:31:36   261-199   3-7            19-20     95-0

I want unique records for the different approval statuses with respect to Date_Time. Expected result:

Date_Time  APPROVED  BACK TO SALES  DECLINED  OTHERS
12:46:36   260-199   1-2            18-19     94-0
15:01:35   261-199   3-7            19-20     95-0
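One way to keep only the first Date_Time of each run of identical status counts (a sketch; dedup with consecutive=true drops repeated consecutive rows while keeping the first of each run - note that quoting the multi-word column name is an assumption to verify on your version):

```
... | xyseries Date_Time, APPROVAL_STATUS, ACT_UW_COUNT
| dedup consecutive=true APPROVED "BACK TO SALES" DECLINED OTHERS
```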
Hi Experts, my Splunk indexer servers are running out of disk space. The main consumers are:

/opt/splunk/var/run/searchpeers
/opt/splunk/var/lib/splunk/_introspection
/opt/splunk/var/lib/splunk/_internaldb
/opt/splunk/var/lib/splunk/kvstore

On the indexer, _introspection, _internaldb, and kvstore have default settings; their data do not move to cold and frozen buckets. Please suggest what I can do to free up space on my server.
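For the internal indexes, retention and total size can be capped in indexes.conf; a sketch (the values below are illustrative, not recommendations - size them to your own disk budget):

```
# indexes.conf on the indexer
[_introspection]
frozenTimePeriodInSecs = 1209600    # 14 days = 14 * 86400
maxTotalDataSizeMB = 10240

[_internal]
frozenTimePeriodInSecs = 2592000    # 30 days = 30 * 86400
maxTotalDataSizeMB = 20480
```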
Hi there, I can see the following error message in the internal logs:

2020-06-06 09:00:23,441 ERROR pid=2 tid=MainThread file=base_modinput.py:log_error:307 | HTTP Request error: 500 Server Error: Internal Server Error for url: https: ***********

Can someone guide me in fixing the issue so that the logs resume? Thanks in advance.
Hi All, I have a total of 30 GB of data to be indexed, which after indexing will be 15 GB per Splunk's average compression. I have a total of 4 indexers with 1 TB of disk space. Can you please let me know the indexes.conf settings on each indexer for a retention of 90 days of searchable data in Splunk? Do the settings below work, or are there improvements that can be made? I got this from the Splunk sizing app: http://splunk-sizing.appspot.com/#ar=0&cdv=1&cr=90&ds=1024&hwr=14&i=4&v=30

indexes.conf:

# volume definitions
[volume:hotwarm_cold]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 996148

# index definition (calculation is based on a single index)
[index_name]
homePath = volume:hotwarm_cold/defaultdb/db
coldPath = volume:hotwarm_cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
homePath.maxDataSizeMB = 53760
coldPath.maxDataSizeMB = 345600
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 8985600
maxDataSize = auto
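As a sanity check on the retention math: frozenTimePeriodInSecs is days times 86400, so 8985600 seconds is 8985600 / 86400 = 104 days, not a strict 90 (the sizing app appears to add the 14-day hot/warm window). A plain 90-day cutoff would be:

```
frozenTimePeriodInSecs = 7776000    # 90 * 86400
```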
We have many dashboards with different field names but similar query logic, so the heading changes for each dashboard. How can we make the heading of the dropdown dynamic, by passing a token or defining a macro? Thanks in advance.
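A sketch of the token approach in Simple XML (the token names, label text, and query are all hypothetical; whether an input's <label> resolves tokens can vary by Splunk version, so verify on yours):

```
<form>
  <init>
    <set token="heading_tok">Response time by app</set>
  </init>
  <fieldset>
    <input type="dropdown" token="app_tok">
      <label>$heading_tok$</label>
      <search>
        <query>index=main | stats count by app | fields app</query>
      </search>
      <fieldForLabel>app</fieldForLabel>
      <fieldForValue>app</fieldForValue>
    </input>
  </fieldset>
</form>
```

Each dashboard would then only need a different <set> value while sharing the same input and query structure.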
Hello, I am relatively new to Splunk Enterprise and recently started with the App for Infrastructure to monitor some CentOS 7.4 servers. I tried to deploy the collection via the auto-deployment script through the "Add Data" tab. This failed, however, since the Splunk collectd plugin does not seem to recognize the libcurl library, which resulted in error code 6 (could not resolve hostname), although a regular curl works (adding a sample metric through HEC). In the end I got around this by using the old write_http plugin method. So I now have the metrics in, but it does not seem to work natively with the Infrastructure app. When opening the server in the app (it is recognized in the Investigate tab), the metrics are empty in the Overview sub-tab. When I click on Analyze, it states the following: "You do not have permissions to access objects of user=x". The panels show the following text: "There is no data available for cpu.system. To see data on the chart, select a different time range, edit filters, or check with your administrator about user permissions." This clearly seems like a rights issue, because the cpu.* metrics are actually there. I have, however, no clue what the Infrastructure app is expecting in terms of rights / users. As far as my knowledge goes, this is all default. I am sending the data to the default em_metrics index from the Infrastructure app with sourcetype collectd_http. Does anybody have any idea why I get these permission messages and how I can fix this? Best regards, Mark
Please check, thanks.

root@ubuntu:/opt/machineagent# ./bin/machine-agent
Using java executable at /usr/bin/java
Using Java Version [11.0.7] for Agent
Using Agent Version [Machine Agent v20.5.1-2635 GA compatible with 4.4.1.0 Build Date 2020-05-27 21:53:51]
ERROR StatusLogger Reconfiguration failed: No configuration found for '8bcc55f' at 'null' in 'null'
[INFO] Agent logging directory set to: [/opt/machineagent/logs]
Could not start up the machine agent due to: javax/annotation/PreDestroy
Please see startup.log in the current working directory for details.

root@ubuntu:/opt/machineagent# cat startup.log
Thu Jun 11 09:30:03 UTC 2020
java.lang.NoClassDefFoundError: javax/annotation/PreDestroy
    at com.appdynamics.voltron.AppLifecycleModule.<clinit>(AppLifecycleModule.java:53)
    at com.appdynamics.voltron.FrameworkBootstrap.start(FrameworkBootstrap.java:157)
    at com.appdynamics.voltron.FrameworkBootstrap.startAndRun(FrameworkBootstrap.java:120)
    at com.appdynamics.voltron.FrameworkApplication.start(FrameworkApplication.java:31)
    at com.appdynamics.agent.sim.Main.startSafe(Main.java:62)
    at com.appdynamics.agent.sim.bootstrap.Bootstrap.main(Bootstrap.java:45)
Caused by: java.lang.ClassNotFoundException: javax.annotation.PreDestroy
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
    ... 6 more
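For context: the javax.annotation package was removed from the JDK in Java 11 (JEP 320), which matches the ClassNotFoundException above - the agent is running under 11.0.7. One workaround sketch (the JAVA_HOME path is illustrative; adjust to wherever a Java 8 runtime is installed on your box) is to point the agent at a Java 8 JRE:

```
# run the machine agent under Java 8 instead of the system Java 11
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 ./bin/machine-agent
```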
Hi, I have a search to get a range of colors showing availability significance over time. My search is like below:

index=xyz | bucket span=1h | eval ftime=strftime(_time, "%d-%m-%Y %H:%M") | chart values(percent) as requests over country by ftime

My columns vary dynamically with the time range and the span of my bucket. I need to set the background color as:

percentage 0 to 50 = red (background color)
percentage 50 to 90 = yellow (background color)
percentage 90 to 100 = green (background color)

My results come out as (bg = background color):

Country   1-05-20 01:00   1-05-20 01:00
US        99 (red bg)     80 (yellow bg)

For the case where the column is constant, it works fine:

<format type="color" field="name">

But here the column field is dynamic. Can someone help me achieve this in XML, please?
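One sketch that avoids naming columns at all: in Simple XML table formatting, a <format> element without a field attribute applies to every column (an assumption to verify on your version), so an expression palette can color all the dynamic time columns by value (hex colors are illustrative):

```
<format type="color">
  <colorPalette type="expression">if(value &lt; 50, "#DC4E41", if(value &lt; 90, "#F8BE34", "#53A051"))</colorPalette>
</format>
```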
I'm trying to set up a summary index using the sitimechart command. I have read a lot about it, in the docs and in this forum, but couldn't find the solution yet. My search is as follows:

index=_internal service=A level=30 | timechart span=1m avg(durationMS) count

This search returns a timechart with the duration and count for every minute. When running it with a summary index, I get a different result:

index=_internal service=A level=30 | sitimechart span=1m avg(durationMS) count

I get all the psrsvd fields, without the actual count and durationMS. It seems I need to calculate them again from psrsvd_ct_durationMS and psrsvd_sm_durationMS, which is not what I want. The docs say that I should be able to run the same search on the summary index and get the same results. What am I missing?
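For reference, the intended two-step pattern is a sketch like the following (the summary index and search name are hypothetical): the si* search runs on a schedule with summary indexing enabled, writing the psrsvd fields; the plain command is then run over the summary index and reconstructs the statistics from them:

```
# scheduled search, with summary indexing enabled on the saved search:
index=_internal service=A level=30 | sitimechart span=1m avg(durationMS) count

# ad-hoc search against the summary index:
index=summary search_name="si_service_a" | timechart span=1m avg(durationMS) count
```

The psrsvd fields are not meant to be read directly; timechart (not sitimechart) over the summary index does the reconstruction.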
Hello everyone, I want to link two dashboards on a field. Dashboard A will show up if the value of the field has a specific word in it, and Dashboard B will show up if there is none, but not both at the same time. I believe wildcards do not work in XML. Is this possible using Simple XML? Thanks
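A sketch using drilldown conditions in Simple XML (the dashboard paths, the status field, and the matched word are all hypothetical; the match attribute takes an eval expression in recent Simple XML versions, so verify on yours):

```
<drilldown>
  <condition match="match('row.status', &quot;ERROR&quot;)">
    <link target="_blank">/app/search/dashboard_a?form.status=$row.status$</link>
  </condition>
  <condition>
    <link target="_blank">/app/search/dashboard_b?form.status=$row.status$</link>
  </condition>
</drilldown>
```

The first matching condition wins, so only one of the two dashboards opens for any given row.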
Hi, I would like to monitor an IIS, application, and SQL Server. Do I have to install the SplunkUniversalForwarder on each server with the recommended add-on? Thanks and regards, René
Is there a way to delete a directory in the /bin directory of my app during the upgrade installation of a newer version of my app? I moved the splunklib directory from /bin to /lib in my app to be compliant with the AppInspect report. If I install my new app and check the upgrade checkbox, the splunklib directory under /bin does not get removed, so I now have the old version of splunklib in /bin and the new version in /lib. Currently the only way I can resolve the issue is to go to the CLI, remove the app, and install the new version clean. Below is the command I used to remove the app:

./splunk remove app [appname] -auth <username>:<password>

The app is located in:

$SPLUNK_HOME/etc/apps/<appname>

I would like to avoid the user having to access the CLI if possible.
Unable to run curl from a Splunk search; it works fine from an ssh session:

-bash-4.2$ curl -k -u admin:xxxxxxx https://localhost:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar -X POST -H "Content-Type:application/json" -d '{"title":"TEST MW","start_time":"1591814852.152","end_time":"1592814852.364","objects": [{"object_type": "entity", "_key": "e7002b20-6b38-49ec-ad2a-29a0be0d2d65"}]}'
{"_key": "5ee1d57e5575d4d6ee5710d1"}

But the search below throws an error:

index=_internal | head 1
| eval header="{\"content-type\":\"application/json\"}"
| eval data="{\"title\":\"TEST MW\",\"start_time\":\"1591814852.152\",\"end_time\":\"1592814852.364\",\"objects\":[{\"object_type\":\"entity\",\"_key\":\"e7002b20-6b38-49ec-ad2a-29a0be0d2d65\"}]}"
| curl method=post uri=https://localhost:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar splunkauth=true debug=true headerfield=header datafield=data

Please help.
Hi, I am using the Splunk Add-on for Unix and Linux to get CPU and iostat data from my hosts. Everything is working fine, but only the iostat data is not coming in.

[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
source = iostat
index = linux
disabled = 0

I have already installed the sysstat package.
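A first check (an SPL sketch) is whether splunkd reports errors when running the scripted input; ExecProcessor records scripted-input failures in the internal index:

```
index=_internal sourcetype=splunkd component=ExecProcessor iostat.sh
```

If the script errors out (for example because iostat is not on the PATH of the Splunk user), the stderr output shows up in these events.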
Hello, I have a CSV file which contains 12 columns, and I want to use the values of the columns as arguments in my search. I thought the best way to achieve this would be a macro that reads the file, but I'm not sure how to do it. Maybe there is another way? I also tried this query:

[| inputlookup concurrency_rules.csv | fields Used* | transpose | rename "row 1" as eventtype | fields eventtype]
| transaction maxpause=2s maxspan=1s maxevents=5
| eval max_time=(duration + _time)
| eval min_time=(_time)
| rename kafka_uuid as uuids
| where eventcount!=5
| table eventtype, min_time, max_time, tail_id, uuids

It works, but it is not as dynamic as I wanted: the file is supposed to have more than one row, so renaming "row 1" is not good enough, and not all the values in row 1 are eventtypes. I also have more fields there that I want to use as arguments. Thanks for the help.
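If the goal is to turn lookup rows into search terms, the format command can render them as a boolean expression instead of transposing one row (a sketch; it assumes a column actually named eventtype in the CSV, which may not match your layout, and which columns to keep depends on which of the 12 you want as arguments):

```
[| inputlookup concurrency_rules.csv
 | fields eventtype
 | format ]
| transaction maxpause=2s maxspan=1s maxevents=5
```

format emits something like ( (eventtype="a") OR (eventtype="b") ), one clause per lookup row, so every row contributes regardless of how many there are.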
Hi All, I'm trying to pass the result of one query to another, but am not able to achieve this. Can someone help with this?

Query 1:

index="internal" sourcetype="*" Space="*" Microservice="*"
| eval Microservice=lower(Microservice)
| dedup Space, Microservice
| stats count by Microservice
| where count>1

Query 2:

index="internal" sourcetype="*" Space="*" Microservice="*" Org="*"
| table Microservice Org Space

The result of the first query is:

Microservice  count
app1          2
app2          2
app3          3

I want to pass these Microservice values to the second query and get a result like:

Microservice  Org  Space

Here I'm trying to find duplicate microservices and get the Org and Space accordingly.
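A subsearch sketch (index and field names follow the question; note that lower() in the inner search returns lowercased names, so the outer match may miss mixed-case originals unless you normalize Microservice in the outer search the same way):

```
index="internal" sourcetype="*" Space="*" Microservice="*" Org="*"
    [ search index="internal" sourcetype="*" Space="*" Microservice="*"
      | eval Microservice=lower(Microservice)
      | dedup Space, Microservice
      | stats count by Microservice
      | where count>1
      | fields Microservice ]
| table Microservice Org Space
```

The subsearch's remaining Microservice field is implicitly turned into an OR filter on the outer search.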