All Topics


I have an issue when I try to convert my date/time value to a y/m/d h:m format — it fails to do so. The value I currently have is, for example, 1629752225700. Can anyone help out?
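A minimal sketch of the usual fix, assuming that value is epoch milliseconds (Splunk's time functions expect epoch seconds, so divide by 1000; the field names here are placeholders):

| makeresults
| eval raw_time=1629752225700
| eval readable=strftime(raw_time/1000, "%Y/%m/%d %H:%M")

1629752225700 / 1000 = 1629752225, which strftime renders as 2021/08/23 20:57 in UTC (the hour shifts with your server's timezone).
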
Hi, I followed the official instructions and deployed Splunk Connect for Syslog (SC4S) on Ubuntu using Docker: https://splunk.github.io/splunk-connect-for-syslog/1266/gettingstarted/docker-systemd-general/

After I ran

sudo systemctl start sc4s

I saw the expected event appear in Splunk, as described in the documentation, so the connectivity to Splunk is correct.

Then I wanted to start logging from our Citrix NetScaler appliance. I followed the instructions at https://splunk.github.io/splunk-connect-for-syslog/1266/sources/Citrix/ and created the index netfw in Splunk. I modified the file splunk_metadata.csv with these two lines:

citrix_netscaler,index,netfw
citrix_netscaler,sourcetype,citrix:netscaler:syslog

I was not sure whether it was needed, but I restarted the container to pick up the new configuration file:

systemctl restart sc4s

However, I checked in Splunk and no data is arriving in that index (0 events). The local firewall on the Ubuntu host is disabled. In fact, I ran tcpdump and I can see syslog packets coming from the NetScaler appliance:

:/opt/sc4s/local/context$ sudo tcpdump -nnSX port 514
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:11:29.533661 IP 192.168.115.159.54092 > 192.168.105.202.514: SYSLOG local5.info, length: 270
    0x0000:  4500 012a bdc7 0000 ff11 9e40 c0a8 739f  E..*.......@..s.
    0x0010:  c0a8 69ca d34c 0202 0116 9c65 3c31 3734  ..i..L.....e<174
    0x0020:  3e20 3232 2f30 392f 3230 3231 3a31 353a  >.22/09/2021:15:
    0x0030:  3131 3a32 3920 474d 5420 4154 4c48 514d  11:29.GMT.ATLHQM
    0x0040:  564d 534c 4232 2030 2d50 5045 2d30 203a  VMSLB2.0-PPE-0.:
    0x0050:  2064 6566 6175 6c74 2054 4350 2043 4f4e  .default.TCP.CON
    0x0060:  4e5f 5445 524d 494e 4154 4520 3435 3635  N_TERMINATE.4565
    0x0070:  3733 3920 3020 3a20 2053 6f75 7263 6520  739.0.:..Source.
    0x0080:  3139 322e 3136 382e 3130 352e 3535 3a34  192.168.105.55:4
    0x0090:  3433 202d 2044 6573 7469 6e61 7469 6f6e  43.-.Destination
    0x00a0:  2031 3932 2e31 3638 2e31 3035 2e31 3630  .192.168.105.160
    0x00b0:  3a34 3238 3737 202d 2053 7461 7274 2054  :42877.-.Start.T
    0x00c0:  696d 6520 3232 2f30 392f 3230 3231 3a31  ime.22/09/2021:1
    0x00d0:  353a 3131 3a32 3920 474d 5420 2d20 456e  5:11:29.GMT.-.En
    0x00e0:  6420 5469 6d65 2032 322f 3039 2f32 3032  d.Time.22/09/202
    0x00f0:  313a 3135 3a31 313a 3239 2047 4d54 202d  1:15:11:29.GMT.-
    0x0100:  2054 6f74 616c 5f62 7974 6573 5f73 656e  .Total_bytes_sen
    0x0110:  6420 3020 2d20 546f 7461 6c5f 6279 7465  d.0.-.Total_byte
    0x0120:  735f 7265 6376 2031 200a                 s_recv.1..
17:11:29.689892 IP 192.168.115.159.54092 > 192.168.105.202.514: SYSLOG local5.info, length: 271
    0x0000:  4500 012b bdc8 0000 ff11 9e3e c0a8 739f  E..+.......>..s.
    0x0010:  c0a8 69ca d34c 0202 0117 6078 3c31 3734  ..i..L....`x<174
    0x0020:  3e20 3232 2f30 392f 3230 3231 3a31 353a  >.22/09/2021:15:
    0x0030:  3131 3a32 3920 474d 5420 4154 4c48 514d  11:29.GMT.ATLHQM
    0x0040:  564d 534c 4232 2030 2d50 5045 2d30 203a  VMSLB2.0-PPE-0.:
    0x0050:  2064 6566 6175 6c74 2054 4350 2043 4f4e  .default.TCP.CON

Could someone point out what I am missing? Thanks a lot in advance.

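A quick way to check whether SC4S received the events but routed them somewhere unexpected (a sketch; per the SC4S docs, sc4s:fallback is the sourcetype SC4S assigns to events that matched no source filter, though the exact fallback destination can vary by version):

index=* (sourcetype=citrix:netscaler:syslog OR sourcetype=sc4s:fallback) earliest=-1h
| stats count by index, sourcetype, host

If the events show up under sc4s:fallback, the NetScaler filter is not matching, which usually comes down to the message format or the port the appliance is sending to.
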
Hello, I'm part of the Wazuh development team and we have noticed that our app for Splunk is tagged as 'Unsupported'. We would like to know how to improve this situation, as the app is currently being updated and maintained regularly. So far, I could not find in Splunk's documentation any clear requirements the app must meet to be tagged as 'developer-supported'. Any help would be greatly appreciated. Thanks in advance.

Hi everyone, I am currently facing an issue, so I'm coming here to ask for your help. My issue is basic: I get data from different jobs. Here is an example of the data I'm getting:

Name="JOB1" StartTime="2021-09-16 05:10:45+02" EndTime="2021-09-16 06:10:45+02"
Name="JOB2" StartTime="2021-09-16 06:08:45+02" EndTime="2021-09-16 09:10:45+02"

As you can see, JOB1 starts sooner than JOB2, but JOB2 finishes later. I would like a table showing, in the same row, the difference between the StartTime of JOB1 and the EndTime of JOB2; in this case the result should be 4 hours, i.e. 240 minutes (hours or minutes, the format doesn't matter).

I can't figure out how to do it, so thank you all in advance.

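A sketch of one way to do it, assuming the StartTime and EndTime fields are already extracted and the trailing +02 parses with %z (you may need to adjust the strptime format to your exact timestamps; the index name is a placeholder):

index=your_index Name="JOB1" OR Name="JOB2"
| eval start_epoch=strptime(StartTime, "%Y-%m-%d %H:%M:%S%z")
| eval end_epoch=strptime(EndTime, "%Y-%m-%d %H:%M:%S%z")
| stats min(start_epoch) as earliest_start, max(end_epoch) as latest_end
| eval duration_min=round((latest_end - earliest_start) / 60)
| eval duration_hours=round(duration_min / 60, 1)

The stats collapses everything into a single row, so the earliest start and the latest end land side by side and the eval can take their difference.
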
Hi, I want to set up a report on the Splunk server to detect when a user is added to a security group. Can you please help with the steps I have to take? Thanks.

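Assuming you already ingest Windows Security event logs, a starting sketch (the index name and field names below are assumptions that depend on your inputs and the Windows TA; EventCodes 4728, 4732, and 4756 cover members added to global, local, and universal security groups):

index=wineventlog EventCode IN (4728, 4732, 4756)
| table _time, host, src_user, user, Group_Name

Save that as a scheduled report or alert on whatever cadence you need.
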
Hello, I am trying to get Windows DHCP logs into Splunk. I am planning the approach below, but wanted to ask whether there is a better way to ingest DHCP logs.

Using the deployment server to push an inputs.conf with:

[monitor://C:\Windows\System32\dhcp]
sourcetype = dhcp
crcSalt = <SOURCE>
alwaysOpenFile = 1
disabled = false
index = dhcplogs
whitelist = Dhcp.+\.log

And then installing the app below on the search heads to parse the logs:

https://splunkbase.splunk.com/app/4359/#/details

I haven't completed the setup yet; before doing so I wanted some advice on whether this is the best way to go ahead, or whether there is a better way to ingest the data. If this is the best way, is there anything I need to be aware of before the setup? Thanks in advance.

Regards,
Pratik Pashte

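Once the input is deployed, a quick sanity check against the names used in the config above:

index=dhcplogs sourcetype=dhcp earliest=-1h
| stats count by host, source

If nothing arrives, splunkd.log on the forwarder (TailReader/WatchedFile messages) is usually the first place to look.
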
Hi, I created some field extractions in the past and then removed them, but on one specific index this SPL still finds the fields in my logs:

index="my-index" | table duration id

It still populates duration and id, even though I removed those field extractions. FYI: the fields do not show up in the fields sidebar on the left of the search results, and the field extraction wizard shows nothing for them under existing fields.

Any idea? Thanks.

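Two things worth checking. First, if the raw events contain key=value pairs or JSON keys named duration and id, Splunk's automatic search-time extraction (KV_MODE) will keep producing those fields with no explicit extraction defined at all. Second, a hedged way to see whether an extraction is still defined somewhere else (another app or another owner):

| rest /servicesNS/-/-/data/props/extractions
| search value="*duration*" OR value="*id*"
| table title, eai:acl.app, eai:acl.owner, stanza, value
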
Hello Splunkers!

How can I check the version of all the add-ons we are using on the heavy forwarders (DB Connect, SolarWinds, and so on) using a REST command in Splunk?

Thanks in advance.

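A sketch using the apps endpoint; note that splunk_server=* only reaches instances that are search peers of the search head where you run it, so for standalone heavy forwarders you would run it locally on each one (or query their management port directly):

| rest /services/apps/local splunk_server=*
| table splunk_server, title, label, version, disabled
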
I've read a bunch of questions and answers on this topic, but I am not able to implement a column chart that meets these two requirements: each column should have a label directly below it, and each column should have a predefined color. This is a mock-up of how it should look: [mock-up image]

These were my two attempts.

First attempt (coloring the columns did not work, but the labels below the columns worked):

index=itdmfc-p source="jira_history://IT-DemMgmt-FC" issue_type="Sub-task"
| rename components{} AS components
| where components LIKE "Cloud"
| dedup key
| eval assigned = if(status=="Assigned", "Assigned", null())
| eval inProgress = if(status=="In Progress", "In Progress", null())
| eval waiting = if(status=="Waiting for customer", "Waiting for customer", null())
| eval accepted = if(status=="Accepted", "Accepted", null())
| eval declined = if(status=="Finally Declined", "Finally Declined", null())
| eval obsolete = if(status=="Obsolete", "Obsolete", null())
| eval total = "1"
| chart count(total) as Total, count(assigned) as Assigned, count(inProgress) as "In Progress", count(waiting) as "Waiting for customer", count(accepted) as Accepted, count(declined) as Declined, count(obsolete) as Obsolete
| transpose 0 column_name="Status"

Result: it was impossible to color the single columns separately. I tried coloring the columns with

<option name="charting.seriesColors">[ffffff, F5B041, F7DC6F, D5DBDB, 3DB42A]</option>

and also with charting.fieldColors, but neither brought the expected result.

Second attempt (coloring the columns worked, but the labels below the columns did not):

index=itdmfc-p source="jira_history://IT-DM-FC" issue_type="Sub-task"
| rename components{} AS components
| where components LIKE "Cloud"
| dedup key
| eval assigned = if(status=="Assigned", "Assigned", null())
| eval inProgress = if(status=="In Progress", "In Progress", null())
| eval waiting = if(status=="Waiting for customer", "Waiting for customer", null())
| eval accepted = if(status=="Accepted", "Accepted", null())
| eval declined = if(status=="Finally Declined", "Finally Declined", null())
| eval obsolete = if(status=="Obsolete", "Obsolete", null())
| eval total = "1"
| chart count(total) as Total, count(assigned) as Assigned, count(inProgress) as "In Progress", count(waiting) as "Waiting for customer", count(accepted) as Accepted, count(declined) as Declined, count(obsolete) as Obsolete

Result: this second attempt allowed me to color the columns, but I wasn't able to add the labels directly below each single column.

Is there a way to combine both requirements: labels directly below the columns as in my first attempt, and each column colored separately as in my second attempt?

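One common workaround (a sketch against your first search, untested against your data): make each status value both an x-axis category and its own field, so every column is a separate series that charting.fieldColors can color while the category label still sits under the column:

index=itdmfc-p source="jira_history://IT-DemMgmt-FC" issue_type="Sub-task"
| rename components{} AS components
| where components LIKE "Cloud"
| dedup key
| stats count by status
| eval {status} = count
| fields - count

Then, in the dashboard XML, map each field name to a color, e.g. <option name="charting.fieldColors">{"Assigned":0xF5B041,"In Progress":0xF7DC6F,"Accepted":0x3DB42A}</option>; since every series is null except at its own category, each label gets exactly one colored column, and setting charting.chart.stackMode to stacked keeps each column centered over its label.
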
Hello, I'm trying to set up an alert that fires only after the second time the threshold is reached. I set throttling with a suppression window of 5 minutes, but the alert still fires the first time. What should I do?

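Throttling only suppresses notifications after a trigger; it cannot make the alert wait for a second breach. One sketch is to build the "twice in a row" logic into the search itself (the index, the threshold of 100, and the 5-minute window are placeholders):

index=your_index earliest=-10m
| bin _time span=5m
| stats count by _time
| where count > 100
| stats count as breached_windows
| where breached_windows >= 2

Schedule it every 5 minutes and trigger when the number of results is greater than 0; it only returns a row when both of the last two 5-minute windows breached the threshold.
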
I have a lookup table that I uploaded to Splunk. I added a lookup definition for it, and the permissions on both the table and the definition are global (read for all, shared across all apps). Both the table and the definition are stored in the search app context, but that shouldn't matter when they are shared among all apps, right?

However, when I go to add a lookup field to a dataset to enrich the data stored in it, the dropdown from which you select the lookup doesn't contain the aforementioned custom lookup. In fact, the dropdown list only extends as far as lookups beginning with 'T' and then stops. So even though we have the Splunk_TA_Windows apps installed, many of those lookups are not present in the dropdown either, despite having global visibility and permissions similar to my custom lookup.

Has anyone else encountered this? Am I missing something?

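The alphabetical cut-off suggests the dropdown is capped at a fixed number of entries rather than filtering on permissions. A hedged way to confirm the definition itself is visible where you expect (the lookup name below is a placeholder):

| rest /servicesNS/-/-/data/transforms/lookups
| search title="my_lookup_definition"
| table title, eai:acl.app, eai:acl.owner, eai:acl.sharing

If it shows up there with global sharing, the problem is likely the UI list limit rather than your permissions, and you can still apply the lookup manually in a search with | lookup my_lookup_definition key OUTPUT enrichment_field.
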
I am currently working on the architecture design for our Splunk platform in AWS. We have ES and are planning to leverage SmartStore for low-cost data retention.

I was reading through the prerequisites of SmartStore, and one of them states: "For SmartStore use with Splunk Enterprise Security, confirm that you have enough local storage available to accommodate 90 days of indexed data, instead of the 30 days otherwise recommended. See Local storage requirements."

Our data retention requirement is 90 days in total, of which we are planning to keep 50 days on local fast storage (to save on cost, which is the whole idea behind using SmartStore). But if local disk for 90 days' worth of indexed data is mandatory, is it even worth considering S3? Could anyone please help with some advice on this?

Hello everyone,

Does Splunk write any internal log entry indicating that it needs to be updated? I mean, when we log in to the web interface there is a message that a new version is available. Is the same message recorded in any of Splunk's internal logs?

Thank you.

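The "new version available" banner is a UI bulletin message. A hedged way to see the same messages from search (this endpoint lists the current bulletin messages rather than a historical log):

| rest /services/messages
| table title, severity, message

For a historical trail you could also try searching index=_internal for the update-checker activity, though the exact component and wording vary by version.
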
I'm looking for a query that will tell me when the error rate increases, i.e. 5 minutes ago it was 120 errors but now it is more than 10% above that. My current search is:

index=aws_kubernetes app=nio tag=error env=prd*
| timechart span=1m count by app limit=0

This shows me the error rate over time, but I need to know when a percentage increase happens.

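A sketch that compares each window to the previous one and flags increases over 10% (the span and threshold are placeholders to tune):

index=aws_kubernetes app=nio tag=error env=prd*
| timechart span=5m count
| streamstats current=f window=1 last(count) as prev_count
| eval pct_change=if(prev_count > 0, round(100 * (count - prev_count) / prev_count, 1), null())
| where pct_change > 10

To keep the per-app breakdown, swap the timechart for | bin _time span=5m | stats count by _time, app and add by app to the streamstats.
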
Hello team,

I have about 10,000 keywords to search for. It is not practical to construct a large query like:

index=dev (key=val1 OR key=val2 OR key=val3 ... key=val10000)

Is there any other way to search?

Thanks,
Phaniraj

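The usual approach is to put the values in a lookup. Two sketches, assuming a CSV lookup file keywords.csv with a single column named key (both names are placeholders). A subsearch expands into exactly the OR list you describe (mind the default subsearch result limit of 10,000):

index=dev [ | inputlookup keywords.csv | fields key ]

Or match after retrieval, which avoids the subsearch limit at the cost of pulling all events first:

index=dev
| lookup keywords.csv key OUTPUT key as matched
| where isnotnull(matched)
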
Hi Splunk Support Team,

I am using the Splunk trial version for training/learning purposes; it was activated on 2nd Sept. Today I am not able to log in to Splunk and am getting the error "Hmmm… can't reach this page" in all browsers. Requesting you to look into this and advise.

Hi, I want to send custom user data from components dynamically. How can I initialize AppDynamics in a component instead of in index.html?

Hello, is there an option to set an alert that triggers only after the search has reached the threshold twice? Thanks.

Hi team, I need someone's help to accomplish a requirement. I have little knowledge of CSS, so I'm seeking your help. I have about 7 dashboards created, and I would like one dashboard that shows those dashboard names in a tile view, with drilldown enabled to the respective dashboards. The tile view has to be built from the input values. Please find the attached image showing how I want the dashboard to look; the values shown in the image are item1, item2, and so on.

Note: I need the CSS and HTML written inside the dashboard itself rather than in a separate CSS file.

Here is a log example:

{"log_time":"2021-08-27T07:16:46.178275260+00:00","output":"stdout","log":"2021-08-27 07:16:46.178 [INFO ] [her-49] a.a.ActorSystemImpl - Logged Request:HttpMethod(POST):http://id-test.api-gateway.sit.ls.api.com/repos/hrn:idmrepo-sit::ol:idm_team_internal_test/ids/getOrCreate/bulk?streaming=true&dscid=GvaIrM-cb4005f6-a828-4fd7-9f54-6082e2912716:200 OK:4","k8scluster":"borg-dev-1-aws-west-1","namespace":"*","env":"DEV","app_time":"","time":"1630048606.178275260"}

I need to extract the digits after "OK:" (the trailing 4 in the log field above) as a time in ms. I have just started using Splunk. I am trying this:

rex "([^\:]+$)(?P<duration>.+)" | stats exactperc98(duration) as P98 avg(duration) as AVG by log

but it is not working.

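In your pattern, [^\:]+$ anchors the first group to the end of the string, leaving nothing for duration to match. A sketch that captures the digits right after "OK:" instead (field=log assumes the JSON is auto-extracted so a log field exists; if not, drop field=log and the rex will run against _raw):

| rex field=log "OK:(?<duration>\d+)"
| stats exactperc98(duration) as P98, avg(duration) as AVG

Grouping by log would give one group per raw message, so it is dropped here; if you want per-endpoint stats, extract the URL into its own field and group by that instead.
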