All Posts

If you need a quick patch, it may be possible to edit the password handler code, but as this app is Splunk-supported, you could submit a support ticket for it.
Hi @Skv, as I said, Splunk Forwarders can store logs on local disk if the connection with the Indexers isn't available, until the connection becomes available again. It depends on the available disk space: e.g. if you know that your systems generate 1 GB/hour and you need to cover 14 hours, you have to give your forwarders 14 GB of disk so the Forwarder can store all the logs. Then you should check whether the connection (when available) is sufficient to send all 14 GB to the indexers; that depends on the network bandwidth and on how long the network stays available. Ciao. Giuseppe
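A minimal sketch of how such on-disk buffering can be configured, assuming a network (TCP) input on the forwarder - persistent queues only apply to tcp/udp/scripted/fifo inputs (file monitor inputs simply pause and resume reading the file), and the 14 GB figure just mirrors the sizing example above. inputs.conf on the forwarder:

[tcp://9514]
# spill events to disk when the in-memory queue fills while indexers are unreachable
queueSize = 10MB
persistentQueueSize = 14GB

When the indexers are unreachable and the in-memory queue fills, events spill to disk up to persistentQueueSize and are replayed once the connection returns.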
@gcusello could you explain? I need to view the logs locally in the factory data room when the connection is down, and once the connection is up it has to send the data to Splunk Cloud. How can this be done?
Please share (anonymised) raw events for your two examples (not pretty-print formatted versions), preferably in a code block using the </> button. Please explain what your desired results would look like - for example, in requirement 2, do you want the count of the number of times the response time has been 114 over the period of your search? These events look like they might be JSON. Have you already extracted the JSON fields during ingestion, or are you working with raw, unparsed data? The more information you can give, the quicker you are likely to receive a useful response.
I figured it out:

| lookup batch.csv OUTPUT startTime finishTime
| eval startTime = max(mvmap(startTime, if(startTime <= _time, startTime, null())))
| eval finishTime = min(mvmap(finishTime, if(finishTime >= _time, finishTime, null())))
| lookup batch.csv startTime finishTime OUTPUT batchID
Hi @sanjai

I believe you are on the right track with `collections/shared/TimeZones`. If you have a look at /opt/splunk/share/splunk/search_mrsparkle/exposed/js/views/shared/preferences/global/GlobalSettingsContainer.jsx, it references the same file, and this is the file which renders the user preferences and displays the Time zone dropdown.

Have a look at including that file (collections/shared/TimeZones.js), or maybe copying it into your app for the purposes of testing. My paths are Linux-based, by the way, so they might need updating for your environment.

Good luck! Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.

Regards
Will
Hi Splunkers,

I am currently working on a development activity with the Splunk React app and need to get the list of timezones from Splunk into my app. From my research, I found that the list of timezones is located in a file called TimeZones.js at the following path:

C:\Program Files\Splunk\quarantined_files\share\splunk\search_mrsparkle\exposed\js\collections\shared\TimeZones.js

Questions:
- How can I retrieve the full list of timezones from the TimeZones.js file?
- Is there a way to get the timezones via a REST API?
- Any other suggestions or thoughts on how to achieve this would be appreciated.

Thanks in advance!
Sanjai
Hi @YP

The app you are referring to is developed by AvoTrix rather than Cisco/ThousandEyes, however you can contact them on support@avotrix.com for support and information about the app. Note: this app requires a separate license from AvoTrix which is not included as part of Splunk. For more info see https://dev.avotrix.com/product/cisco-thousandeyes-add-on-for-splunk/

Regarding the APIs it uses, the app calls the following ThousandEyes v6 endpoints:

agents_monitors.py
  123: url_alert = "https://api.thousandeyes.com/v6/agents.json"
  124: url_monitor = "https://api.thousandeyes.com/v6/bgp-monitors.json"
alerts_notification.py
  170: url_alert = "https://api.thousandeyes.com/v6/alerts.json"
reports.py
  122: url = "https://api.thousandeyes.com/v6/reports.json"
activity_log.py
  131: url_activity = "https://api.thousandeyes.com/v6/audit/user-events/search.json"
Integrations.py
  117: url_alert = "https://api.thousandeyes.com/v6/integrations.json"

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.

Regards
Will
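If you want to exercise one of these endpoints yourself, here is a minimal sketch (not taken from the add-on itself; the v6 API authenticates with HTTP basic auth using your ThousandEyes account email and API token - both placeholders here):

# list agents, the same endpoint agents_monitors.py collects from
curl -u "user@example.com:<api-token>" "https://api.thousandeyes.com/v6/agents.json"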
@ayomotukoya

The error "cannot execute binary file: Exec format error" usually indicates that the Splunk Universal Forwarder (UF) binary is not compatible with your system's architecture.

Check your system architecture using:

uname -m

Ensure that the Splunk UF package matches your architecture. If you are on a 64-bit OS but downloaded a 32-bit binary (or vice versa), it could cause this issue. Make sure your OS and binary match in architecture. Then navigate to the Splunk bin directory and verify the binary type, as shown below.
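For example, a quick check (a sketch assuming the default /opt/splunkforwarder install path; adjust for your environment):

cd /opt/splunkforwarder/bin
uname -m       # system architecture, e.g. x86_64 or aarch64
file splunk    # binary architecture; the reported type should match uname -m

If file reports, say, an ARM aarch64 executable on an x86_64 host, download the matching package from the Splunk site and reinstall.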
@y0u7  If you are satisfied, please consider accepting the solution.
@YP

Check this video for more information: https://www.youtube.com/watch?v=decUuAZOa2Q

For the most current information, it's advisable to consult official Cisco resources or contact their support teams directly, because this is a developer-supported add-on.
First off, please post raw text to illustrate data (not Splunk's contracted display). You already get the fields DeviceProperties{}.Name and DeviceProperties{}.Value. There are several ways to transform that into the table format you want, one of which does not require any field extraction. But let me use extraction - because when dealing with structured data such as JSON, it is important not to attempt text extraction such as regex.

DeviceProperties is an array. That is why Splunk flattens it into the {} notation. The most straightforward method is to point the spath command at this array, run mvexpand over the array so the elements become single-valued hash elements, then run spath over these elements:

| spath path=DeviceProperties{}
| mvexpand DeviceProperties{}
| spath input=DeviceProperties{}

This will give you a field called Name and a field called Value. All you need to do is to transpose it. So, you add:

| table Name Value
| transpose 0 header_field=Name column_name=_

This, of course, assumes that you only have one event. If you have multiple events, you must have a unique event key for each row. (When asking a question, these constraints must be clearly stated. Asking volunteers to read your mind is never a good idea.) Assuming that unique key is called "UniqueKey" for each event (it could be a combination of existing fields, just like in SQL), you can use xyseries instead of transpose:

| xyseries UniqueKey Name Value
Hi @livehybrid

Thanks... let me try the above solution. Also, I want to know how to get the total count of requests made, based on the date range selected. Is this the correct way - should I count it as a single API request whenever there is a path=queryStringParameters?

index=* source IN ("") "event"
| spath input=_raw output=queryStringParameters path=queryStringParameters
| table queryStringParameters
| stats count

Below is my Splunk log:

index=* source IN (*)
{
   event: {
      body: null
      httpMethod: GET
      path: /data/v1/name
      queryStringParameters: {
         identifier: 106
      }
      requestContext: {
         authorizer: {
            integrationLatency: 0
            principalId: some@example.com
         }
         domainName: domain
      }
      domainName: domain
      resource: /v1/name
   }
   msg: data:invoke
}
Hi there team,

What APIs are currently included in the Cisco ThousandEyes Add-on for Splunk?

Is there a plan for adding more APIs in future?

YP
Hi @nithys

If you want to look at the count per minute then you should be able to add something like the following to your existing search:

| timechart span=1m count

Regarding the SLA - is the SLA based on the responses taking less than a certain time? If so, what is that? You can do an eval to determine whether the SLA is met or not:

| eval SLA_met=if(responseTime>100, 0, 1)
| timechart span=1m count by SLA_met

(1 = met, 0 = not met).

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.

Regards
Will
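For the "Passed SLA%" part of the original question, a minimal sketch that builds on the eval above (assuming the same illustrative 100 ms response-time threshold; swap in your real SLA parameters):

| eval SLA_met=if(responseTime<=100, 1, 0)
| stats sum(SLA_met) as passed, count as total
| eval passed_SLA_pct=round(passed/total*100, 2)

Run over the dashboard's selected date range, this returns a single percentage of requests that met the SLA.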
Apart from https://community.splunk.com/t5/Splunk-Enterprise/Linear-memory-growth-with-Splunk-9-4-0-and-above/m-p/712550#M21712, some Splunk instances (UF/HF/SH/IDX) might see higher memory usage after the upgrade.

9.4.0 introduced a new active channel cache with a cache TTL of 3600 seconds:

active_eligibility_age = <integer>
* The time, in seconds, after which splunkd removes an idle input channel from the active channel cache to free up memory.
* Default: 3600

Before 9.4.0, splunkd used an inactive channel cache with a cache TTL of 330 seconds. It's not used anymore:

inactive_eligibility_age_seconds = <integer>
* Time, in seconds, after which an inactive input channel will be removed from the cache to free up memory.
* Default: 330

Because of the high active channel cache TTL, the splunkd memory footprint might be higher on some Splunk deployments. In limits.conf, reduce the active channel cache TTL to 330 (from 9.4.2 onwards it is 330 by default):

[input_channels]
active_eligibility_age = 330
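A quick way to confirm the effective value after the change (a sketch; $SPLUNK_HOME is your install path, e.g. /opt/splunk):

$SPLUNK_HOME/bin/splunk btool limits list input_channels --debug

The --debug flag shows which .conf file each setting is read from, so you can verify your override wins.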
Hi @ayomotukoya

Please can you confirm the filename of the package you downloaded, and the OS & architecture that you are trying to deploy to? It sounds like you might be trying to run the wrong version - e.g. trying to run a PPC64LE, ARM, or s390x build on an x86-64 Linux system.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.

Regards
Will
Hi Team

I want to have a dashboard that shows API stats.

1. No. of requests - how to get the total count of requests made, based on the date range selected. Below is my Splunk log:

{
   event: {
      body: null
      httpMethod: GET
      path: /data/v1/name
      queryStringParameters: {
         identifier: 106
      }
      requestContext: {
         authorizer: {
            integrationLatency: 0
            principalId: some@example.com
         }
         domainName: domain
      }
      domainName: domain
      resource: /v1/name
   }
   msg: data:invoke
}

2. Response time - how to get the total count for a response time, based on the date range selected. Below is the Splunk log format:

{
   client: Ksame@example.com
   domain: domain
   entity: name
   msg: responseTime
   queryParams: {
      identifier: 666
   }
   requestType: GET
   responseTime: 114
}

I have only the above two logs in Splunk. How do I get the stats counts below?

3. Requests per min (count of requests processed by an API service per minute).
4. Passed SLA% (percentage of service requests that passed service level agreement parameters, including response time and uptime).
When I try to run "./splunk start" it says "cannot execute binary file: Exec format error". I'm in the bin directory running as the root user; I tried as the splunk fwd user and also tried "splunk start" in the bin directory, but I'm having the same issue. Anyone know how to resolve this?
Hello, is it solved? Did you check splunkd.log for warnings/errors?