All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I believe that my Splunk's Python has some issue during initialization. This happens whenever I try to run any of several searches that include Custom Search Commands. The Python scripts in the CSCs are fine, as they run without issue on other computers.

Error in 'script': Getinfo probe failed for external search command 'detailmodel'
Fatal Python error: initfsencoding: unable to load the file system codec
File "C:\Program Files\Splunk\Python-2.7\Lib\encodings\__init__.py", line 123
    raise CodecRegistryError,\
                             ^
SyntaxError: invalid syntax
Current thread 0x000037c0 (most recent call first):
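For anyone debugging a similar initfsencoding failure: a common cause is a system-wide PYTHONPATH or PYTHONHOME environment variable pointing Splunk's bundled interpreter at a different Python install (the SyntaxError on Python 2 `raise` syntax suggests one Python version is parsing another version's encodings package). This is only a diagnostic sketch, not a confirmed fix for this exact error:

```python
import os

def check_python_env():
    """Report environment variables that can hijack an embedded Python.

    A system-wide PYTHONPATH or PYTHONHOME is a common cause of
    "initfsencoding: unable to load the file system codec" errors,
    because the bundled interpreter then loads an incompatible
    encodings package from a foreign install.
    """
    suspects = ["PYTHONPATH", "PYTHONHOME"]
    return {name: os.environ[name] for name in suspects if name in os.environ}

conflicts = check_python_env()
if conflicts:
    print("Potential conflicts:", conflicts)
else:
    print("No PYTHONPATH/PYTHONHOME set")
```

If either variable is set machine-wide, unsetting it (or scoping it away from the Splunk service) is worth trying before digging deeper.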
Hi, we are trying to integrate ServiceNow ("snow") with Splunk using the latest version of the Splunk Add-on for ServiceNow. While configuring the snow URL and credentials, we get the error below. We are using a local snow username; the instance is SSO-configured and the version is "newyork". I am able to connect to the URL from the Splunk server using /bin/splunk cmd openssl s_client -connect xxx.service-now.com:443.

Unable to reach server at https://xxx.service-now.com. Check configurations and network settings
Hello, I am beginning to study this tool because I am interested in using it in a work solution. To store all the collected data, do I need to install a server in my organization, or where exactly is all that data stored?
Hello, I have some logs with a common field and I'd like to correlate them.

Here is my first event:
26/02/2020 16:34:21|toto|test|600|440|Session End|device=titi|sessionId=3bee772f147|ext=External

Here is my second one:
26/02/2020 16:34:21|toto|test|600|440|Upload|sessionId=3bee772f147|ext=External|username=mvag

So, my question is: how can I extract the username when the sessionId is the same for the two events, by searching for the "Session End" information (first event)? The main idea is to search for the "Session End" information and find the username where the sessionId is the same.

Thank you in advance. Michael

P.S.: It would be better if the device manufacturer had added the username to each event, but that would be too easy.
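In SPL this kind of correlation is usually done with something like `stats values(username) as username by sessionId` (or `transaction sessionId`), filtering afterwards on the sessions that contain a "Session End" event. The grouping logic can be sketched outside Splunk, using the two sample events above:

```python
import re
from collections import defaultdict

logs = [
    "26/02/2020 16:34:21|toto|test|600|440|Session End|device=titi|sessionId=3bee772f147|ext=External",
    "26/02/2020 16:34:21|toto|test|600|440|Upload|sessionId=3bee772f147|ext=External|username=mvag",
]

# Group every event's key=value pairs by sessionId, then look up the
# username for any session that also contains a "Session End" event.
sessions = defaultdict(dict)
for line in logs:
    sid = re.search(r"sessionId=([^|]+)", line)
    if not sid:
        continue
    fields = sessions[sid.group(1)]
    fields["_has_end"] = fields.get("_has_end") or "Session End" in line
    for key, value in re.findall(r"(\w+)=([^|]+)", line):
        fields.setdefault(key, value)

for sid, fields in sessions.items():
    if fields.get("_has_end") and "username" in fields:
        print(sid, "->", fields["username"])  # 3bee772f147 -> mvag
```

The same idea in SPL: collapse both events into one row per sessionId, so the username from the Upload event sits next to the "Session End" marker.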
Hi, we have several tiers with several Business Transactions and several Service Endpoints (in fact, almost all the traffic is detected under Service Endpoints), but the load values are not the expected ones. The tier flow map shows a load that seems to match only the Business Transaction values, yet the Service Endpoints of that tier are executing a much higher number of calls, and the Service Endpoint load graph displays the expected values. From our point of view, the tier load should take into account all the calls, i.e. not only the BTs but also all the Service Endpoint calls, but that seems not to be the case. We need to confirm how dbAPM calculates the call values displayed in the Load graph under the Tier Flow Map view. Which entities are taken into account to obtain those values? Please kindly confirm. Thanks a lot in advance!
Hi, I have a KV store with a couple of fields. One of the fields is a timestamp field, 'StartTime'. I would like to use the standard time picker in one of my dashboards to display the contents of the store based on the picker's range. I came across the following link, but it doesn't seem to link the time picker and the field: answers[dot]splunk[dot]com/answers/209693/index[dot]html

I have tried something like:
StartTime>$form.timePicker.earliest$ AND StartTime<$form.timePicker.latest$
but it wouldn't work. Any help is much appreciated. Regards, Thomas
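One likely reason the direct comparison fails is that StartTime is a string while the picker tokens resolve to epoch values (or relative expressions like "-24h@h"), so in SPL the usual approach is to convert StartTime with `strptime()` and compare against the picker's epoch bounds (e.g. the info_min_time/info_max_time fields that `addinfo` provides). A Python sketch of the comparison, assuming a "%Y-%m-%d %H:%M:%S" format for StartTime:

```python
import time

# Hypothetical KV store rows with a string timestamp field
rows = [
    {"StartTime": "2020-02-25 10:00:00"},
    {"StartTime": "2020-02-26 10:00:00"},
    {"StartTime": "2020-02-27 10:00:00"},
]

def to_epoch(ts):
    # Mirrors SPL's strptime(StartTime, "%Y-%m-%d %H:%M:%S")
    return time.mktime(time.strptime(ts, "%Y-%m-%d %H:%M:%S"))

# Stand-ins for the resolved time picker bounds
earliest = to_epoch("2020-02-26 00:00:00")
latest = to_epoch("2020-02-28 00:00:00")

selected = [r for r in rows if earliest <= to_epoch(r["StartTime"]) < latest]
print([r["StartTime"] for r in selected])
```

The point is that both sides of the comparison must be numeric epochs; comparing a string field against a token rarely does what you expect.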
I do not understand the pricing model for Splunk Enterprise. If my daily ingest rate is 15 GB/day, does that mean the cost for Splunk Enterprise is ($150 * 15) * x days in the month? If this is incorrect, can someone please clarify the rate?
I know that neither Microsoft nor Splunk supports Windows 2003 or the 6.x UF any more, and that sending 6.x UF data to 8.x indexers is not compatible. But is there still any way to monitor Windows 2003 servers using 8.x indexers? Any workarounds?
Hi, I'm trying to parse log entries from Oracle WebLogic, and no matter how I extract the fields I can't quite get things right. Here is a log entry example:

10.135.188.74 2020-02-26 08:44:59 GET /psc/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.PURCHASE_ORDER.GBL 200 30091 "https://hostname.com/psp/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.SRM_WORKCENTER.GBL" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"

The extractions didn't work "out of the box" using access_combined (there was no "file" field), so I had to create new extractions. The problem is that things like the referer aren't working properly. Here's how I broke down the log entry with regex:

clientip - 10.135.188.74
http_method - GET
file (and http_request) - /psc/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.PURCHASE_ORDER.GBL
status - 200
http_referer - https://hostname.com/psp/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.SRM_WORKCENTER.GBL
user_agent - Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko

Firstly, I don't expect file and http_request are supposed to be the same thing, but I couldn't make the regex work any other way without running into one of those "regex too complex" errors. Secondly, even though the referer is coming from the application itself, it gets the external_referer eventtype, which then blows up the dashboards with thousands of referers. I also have a bunch of other problems, like the Audience page showing lots of "Error in map: did not find value for require attribute", but let's take one issue at a time. Thanks for your help on this.
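Since the format is space-delimited with two quoted trailing fields, one straightforward anchored pattern covers the whole line without backtracking blowups. A sketch against the sample entry above (field names chosen to match the breakdown; they are not official access_combined names):

```python
import re

line = ('10.135.188.74 2020-02-26 08:44:59 GET '
        '/psc/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.PURCHASE_ORDER.GBL '
        '200 30091 '
        '"https://hostname.com/psp/PORTAL/EMPLOYEE/ERP/c/MANAGE_PURCHASE_ORDERS.SRM_WORKCENTER.GBL" '
        '"Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"')

# \S+ for the unquoted columns, [^"]* inside quotes for referer and agent:
# simple tokens like these keep the engine from backtracking ("too complex").
pattern = re.compile(
    r'^(?P<clientip>\S+)\s+'
    r'(?P<date>\S+)\s+(?P<time>\S+)\s+'
    r'(?P<http_method>\S+)\s+'
    r'(?P<uri_path>\S+)\s+'
    r'(?P<status>\d+)\s+(?P<bytes>\d+)\s+'
    r'"(?P<http_referer>[^"]*)"\s+'
    r'"(?P<useragent>[^"]*)"'
)

m = pattern.match(line)
print(m.group("clientip"), m.group("http_method"), m.group("status"))
```

The same regex body should drop into an SPL `rex` or a props/transforms extraction; using `\S+` and `[^"]*` instead of greedy `.*` is generally what avoids the "regex too complex" errors.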
Here is my query:

index="myIndex" AND host="myHost" AND ObjectName="myObjectName"
| eval secondsEpoch = GroupDateTime/1000
| eval displayDate=strftime(secondsEpoch,"%m-%d %H:%M")
| chart sum(RecordCount) over CallingClass by displayDate
| sort 0 -GroupDateTime

GroupDateTime is a time that I am logging to Splunk; it contains an epoch time in milliseconds. No matter how I sort my data, it does not come out the way I want: I want the latest date in the left column. I have even tried to chart CallingClass over GroupDateTime, and that doesn't work either. I even tried _time. I have tried for several days to get this to work and can't find a solution. I suspect it is probably something easy. I am new to Splunk, so some solutions I didn't understand or couldn't get to work.
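One thing worth noting about the query above: after `chart`, the GroupDateTime field no longer exists in the results, so `sort 0 -GroupDateTime` has nothing to act on, and the column order comes from the displayDate strings. The usual approach is to order by the numeric epoch and only then format for display; a Python sketch of that idea, with made-up epoch values:

```python
import time

# GroupDateTime-style epochs in milliseconds (sample values)
epochs_ms = [1582672800000, 1582586400000, 1582759200000]

# Sort on the numeric epoch first, then format for display. Sorting the
# "%m-%d %H:%M" strings themselves only works while lexical order happens
# to match chronological order (it breaks across year boundaries).
labels = [time.strftime("%m-%d %H:%M", time.gmtime(ms / 1000))
          for ms in sorted(epochs_ms, reverse=True)]
print(labels)  # ['02-26 23:20', '02-25 23:20', '02-24 23:20']
```

In SPL terms, that means establishing the order while the epoch field still exists (before or instead of charting on the formatted string), rather than trying to sort on a field the `chart` command has already discarded.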
I have a search:

"%UC_CALLMANAGER-6-DeviceUnregistered" "DeviceType=90" OR "DeviceType=73"

which correctly matches the entry below, and I have an alert to send an email notification. I would like to pass a string from the syslog text (DeviceName=JK-Test) in the email message, but I can't seem to get it passed. When I expand the syslog event in search, it says DeviceName is JK-Test. In my email message I have tried:

$Result.DeviceName$ test line 1
"$event.DeviceName$" test line 2
$DeviceName$ test line 3
$result.devicename$ test line 4

Is there a format where I can pass that info?

Search result:
Feb 25 16:17:26 myserver.local Feb 25 2020 22:17:26.527 UTC : %UC_CALLMANAGER-6-DeviceUnregistered: %[DeviceName=JK-Test][IPAddress=1.1.1.1][Protocol=RouteList][DeviceType=90][Description=Test Hunt Group][Reason=8][IPAddrAttributes=0][AppID=Cisco CallManager][ClusterID=StandAloneCluster][NodeID=myserver]: Device unregistered
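For reference, the alert email token syntax is $result.FieldName$, and it is case-sensitive, so $Result.DeviceName$ and $result.devicename$ both miss while $result.DeviceName$ should hit. The token reads the first row of the final results, so DeviceName must survive to the end of the search (e.g. keep it with `| table _time DeviceName`). A savedsearches.conf sketch (subject and message text are illustrative, not from the original post):

```
action.email.subject = Device unregistered: $result.DeviceName$
action.email.message.alert = Device $result.DeviceName$ has unregistered from CallManager.
```

The same $result.DeviceName$ token works in the Subject and Message boxes of the alert action UI.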
Hi chaps, I am confused about which forwarder version to install. Current environment: I have 6 HFs (v6.5.1) forwarding logs to intermediate forwarders (v7.2.4) and then to indexers (v7.2.4). Now I want to upgrade the HFs to the latest version, so please advise me on which forwarder version to upgrade to. I'm thinking of going with the same version as the intermediate forwarders (v7.2.4), but I also wondered why I can't go for v8.x. If you could provide a compatibility matrix (other than this one - https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers ), it would be appreciated. Thanks, Pramodh B
I am trying to configure the trial of Splunk Business Flow on Splunk Enterprise (a trial) on Windows. I checked that I have met all the requirements in the documentation, and all the deployment check results in the app are OK. However, when I try to complete the registration, I receive this error:

Error processing registration: register error during encryption/decryption. X-Request-ID: e3f95c26-1112-442a-bec1-7032e4b1b85f.

In the error log, I find this:

[SplunkClient] ERROR requestID:8c79e81a-473e-4b83-a480-09b4a5be9bd3 Error reading file : C:\Program Files\Splunk\etc\apps\splunk-business-flow\local\cloud_public.pem

Indeed, that .pem file is not in that directory. I tried installing the Business Flow app both from the App section in Splunk and from file. If someone could help me, I would be really thankful!
I have timestamps in the format below: 2020-02-17 18:23:04. I would like to calculate the difference between two such fields, the start and end times of an activity. Which function can I use to get the time difference if the time format is like the above?
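In SPL this is typically done by converting both fields to epoch with `strptime(field, "%Y-%m-%d %H:%M:%S")` in an `eval` and subtracting. The same conversion sketched in Python, with made-up start/end values in the stated format:

```python
from datetime import datetime

# Same format string that strptime() would take in an SPL eval
FMT = "%Y-%m-%d %H:%M:%S"

start = datetime.strptime("2020-02-17 18:23:04", FMT)
end = datetime.strptime("2020-02-17 19:00:04", FMT)

# Difference in seconds between the two timestamps
delta_seconds = (end - start).total_seconds()
print(delta_seconds)  # 2220.0
```

Once you have the difference in seconds, it can be reformatted into minutes/hours however the report needs.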
Hi all, I want to extract the integer between the parentheses from my Routing_Location field and then use it in a drilldown link. I'll give an example:

Routing_Location
USA,Verizon_Cell (1345)
USA,Sprint_Cell(3451)

I want to click on the cell where 1345 is and use it in a drilldown link like www.example.com/drilldown.php?route_loc_num=$row.Routing_Location_Num$. I have tried to use rex "\[(?<Routing_Location>[^\]]*)", but it can only be used within the search, not the way I used it: www.example.com/drilldown.php?route_loc_num="\[(?<$row.Routing_Location$>[^]]*)". Is there anything I can use before adding the link to the drilldown, to assign the value in the parentheses to a variable and then use the variable in the link?
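Since the values use parentheses rather than square brackets, the pattern wants to be `\((\d+)\)`; the usual approach is to extract that into a separate field (e.g. with `rex` before the table) so the drilldown can reference $row.Routing_Location_Num$, rather than putting a regex into the link itself. A quick check of the pattern against the two sample values:

```python
import re

rows = ["USA,Verizon_Cell (1345)", "USA,Sprint_Cell(3451)"]

# Capture only the digits between parentheses; handles both the value
# with a space before "(" and the one without.
nums = [re.search(r"\((\d+)\)", r).group(1) for r in rows]
print(nums)  # ['1345', '3451']
```

The SPL equivalent would be along the lines of `| rex field=Routing_Location "\((?<Routing_Location_Num>\d+)\)"`, after which the drilldown URL can use the new field as a token.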
Hi, we updated our application to WebLogic Server 12.2.1.3 from 12.1.2, and after that the AppDynamics Java agent won't start. Nothing is written in the log files, and the javaagent is set correctly in the WebLogic startup arguments: "-javaagent:<path>/javaagent.jar". We have updated the javaagent to 4.5.19.29348, but still the same issue. Has anybody run into the same issue, and how did you solve it? The machine agent is working fine, so the connection to the controller is also working.
Hey, I have a field called externalID with values like the following:

1766000000009834
1766000000009835
1766000000009836

and I am looking for a way to remove all the 0's in the middle when I output to a table, and then rename the field to something like shortID, so the table output would show the following values:

17669834
17669835
17669836

I have tried playing around with functions like eval, ltrim, replace, etc. and am not getting anywhere. Can anyone help me out?

UPDATE: Hi, I need help with this problem again. As previously stated, I only want to remove the zeros in the middle, but the options given above seem to remove all 0's. So let's say my externalID is 867182000000921046: I want my table to show 867182921046, but the above options remove all 0's and give me 86718292146. Any ideas how I can do this?
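One way to remove only the padding run while keeping isolated zeros (like the one inside "921046") is to match a minimum run length, e.g. four or more consecutive zeros, rather than every zero. A sketch of that regex against both sample values, assuming the padding run is always at least four zeros long:

```python
import re

ids = ["1766000000009834", "867182000000921046"]

# Collapse only long runs of zeros (4 or more in a row); single zeros
# that are part of the real number are left untouched.
short_ids = [re.sub(r"0{4,}", "", i) for i in ids]
print(short_ids)  # ['17669834', '867182921046']
```

In SPL the same substitution would be `| eval shortID = replace(externalID, "0{4,}", "")`, since `replace()` accepts a regex; tune the `{4,}` threshold to the shortest padding run your IDs actually contain.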
Hi all, is it possible to show the exact value of the count in my KPI value? For example, if the current value is 10298, it should show 10298, not 1k. Thanks, and happy Splunking!!
Hi! I've inherited an app which contains custom searches only (this isn't a Splunkbase app, but an "in-house" app). My users want to be able to delete searches etc. from the app, but they can't. I want them to be able to manage the searches in the app without a new deployment, and also not have a subsequent push of all apps cause the searches to come back. To fix this, can I do the following:

1) On the SH deployer, move the searches from default/savedsearches.conf to local
2) Set app.conf to use full deployment
3) Push the deployment
4) Set the app back to local_only?

Looking at this: https://docs.splunk.com/Documentation/Splunk/8.0.2/DistSearch/PropagateSHCconfigurationchanges, I think this will work, but I want to make sure. "Use [full deployment] mode if you have a configuration on the deployer in the app's /local directory, and you want to push it to the members and then delete it from the deployer." - This is saying, basically, that it wipes out the app and then pushes the new one, correct? Then, when I'm done, I change it back to "local_only". Am I reading that correctly? What I don't want is to start having searches from the previous version stored in users' folders, etc. Thanks! Stephen
Hi team, we are using Splunk ITSI and planning to improve Splunk performance. We need help understanding how much overhead each service puts on the Splunk server. To understand this better, please read the scenario below.

We have one platform, say "App", with 5 servers, and we need to measure performance on the basis of 3 KPIs: CPU utilization, memory utilization, and disk utilization. To measure the performance of the overall platform, we need to measure the performance of each host. We have two service configuration methods.

First:
1. Create a base search for the KPIs.
2. Create one service per host, with that host as an entity and all 3 KPIs in it.
3. Create one overall service for the "App" platform, with service dependencies on the services created for all 5 hosts.

Second:
1. Create a base search for the KPIs.
2. Create one overall service for the "App" platform, with all 5 hosts as entities and all 3 KPIs in it.

I want to understand which method will be more efficient and put less overhead on Splunk.