All Posts


What is the best way to create a search query for identifying knowledge objects owned by inactive users and cleaning them up?
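A minimal SPL sketch of one common approach, assuming "inactive" means the account no longer exists on the search head (the rest endpoints are standard; the orphaned-search logic below is an illustration you would extend to dashboards, lookups, etc., and to your own definition of inactivity):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| rename eai:acl.owner AS owner, eai:acl.app AS app
| table title owner app
| search NOT [| rest /services/authentication/users splunk_server=local | rename title AS owner | fields owner]

This lists saved searches whose owner is not a current user; from there they can be reassigned or deleted.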
I tried to set up the Node.js agent in AppDynamics, and it seems that I've configured everything correctly. However, I'm encountering a connectivity issue: the AppDynamics agent is unable to establish a connection.

Step 1: I downloaded the Express Cart Node.js application from GitHub.

Step 2: After navigating to my Node.js project directory in the terminal, I executed the following commands:

npm install
npm fund
npm audit
npm audit fix
npm app.js

Step 3: I configured the agent within the app.js file, providing the necessary details such as access key, account name, tier name, and node name.

Despite ensuring that all configurations are accurate, the agent still seems unable to establish a connection. Any insights on what might be causing this connectivity issue would be greatly appreciated!
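For reference, the AppDynamics Node.js agent is normally initialized at the very top of app.js, before any other require, with a profile() call along these lines (a sketch per the standard agent API; all values below are placeholders). Note also that the app itself is started with node app.js rather than npm app.js, and that the controller host/port/SSL settings are the usual culprits when the agent cannot connect:

// app.js: must run before any other module is loaded
require("appdynamics").profile({
  controllerHostName: "controller.example.com", // placeholder
  controllerPort: 443,                          // placeholder
  controllerSslEnabled: true,
  accountName: "my-account",                    // placeholder
  accountAccessKey: "my-access-key",            // placeholder
  applicationName: "ExpressCart",
  tierName: "web",
  nodeName: "node-1"
});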
We are looking to confirm with "JAMF Integrations" that this app supports the Jamf Pro API (as opposed to the Classic API) and that it can be configured to use API Roles and Clients with an Access Token, Client ID, and Client Secret rather than Basic Auth.
Thanks for the response. We do not want downtime. Please find the steps below.

Old Splunk indexers:
All the data is ingesting (storage path) in the default location (/opt/splunk/var/lib/splunk). Has CM.

New Splunk servers:
1. Prepare 3 new indexers and a new CM.
2. On the new indexers, the storage path for hot & warm data is /splunk_hot and for cold data /splunk_cold.

Plan for migration from old to new (without downtime):
1. Build a new Cluster Master.
2. Build 3 new indexers with storage paths /splunk_hot and /splunk_cold.
3. Create symbolic links on the old indexers with the same names as the new indexers' storage paths (/splunk_hot and /splunk_cold). Example: ln -s /opt/splunk/var/lib/splunk/….. /splunk_hot (I am not sure here)
4. Change the paths in indexes.conf on the old Cluster Master:

[volume_primary]
#Path = /opt/splunk/var/lib/splunk (this is the old path and it is committed)
Path = /splunk_hot

[volume_cold]
#Path = /opt/splunk/var/lib/splunk (this is the old path and it is committed)
Path = /splunk_cold

5. Push the bundle from the old CM.
6. Join the new indexer servers to the old CM. (This will sync the data.)
7. Wait till all the data is synced.
8. Move the old CM config to the new Cluster Master.
9. Shut down the old CM.
10. Last step: take the old indexers offline with enforce-counts.

I am stuck here: I want to create symbolic links on the old indexer servers. How can I create them and point the hot data to /splunk_hot and the colddb to /splunk_cold? I can see on the old indexers there are lots of indexes available (like windows, Linux, security, waf, firewall).
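For the symlink question specifically, a sketch of what this could look like on each old indexer (an assumption based on the layout described above, where hot/warm and cold both live under the same default location; stop Splunk before relinking):

# on each old indexer, as root, with Splunk stopped
/opt/splunk/bin/splunk stop
ln -s /opt/splunk/var/lib/splunk /splunk_hot
ln -s /opt/splunk/var/lib/splunk /splunk_cold
/opt/splunk/bin/splunk start

With these links in place, the new volume paths pushed from the CM resolve to the existing data on the old indexers without moving any buckets.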
I am getting the below error:

Error in 'EvalCommand': Type checking failed. The '==' operator received different types.
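This error typically means the two sides of == in an eval or where expression have different types, for example a numeric literal compared against a string. Coercing one side with tonumber() or tostring() usually resolves it; a hypothetical example (the field name status_code is made up):

| eval result = if(tonumber(status_code) == 200, "ok", "error")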
@gcusello I don't see any difference, it's not extracting anything.
Hi @LizAndy123,
let me understand: you want to extract the user field (located at the beginning of the event) and the resource to access (located at the end of the event). In this case you have to use two regexes:

| rex "^(?<user>[^ ]+)"
| rex "(?<resource>\w+)$"

Ciao.
Giuseppe
Hi @karthi2809,
you can use % only with the like function, otherwise you have to use *:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(
    like(BucketFolder, "%inbound%epm%"), "EPM",
    like(BucketFolder, "%inbound%KPIs%"), "APEX_File_Upload",
    like(BucketFolder, "%inbound%concur%"), "ConcurFile_Upload",
    true(), "Unknown")
| stats values(InterfaceName) AS InterfaceName min(timestamp) AS Timestamp values(BucketFolder) AS BucketFolder values(Status) AS Status BY correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId

Ciao.
Giuseppe
I have an event where I can extract the 2 different IDs, but how do I show that ID 1 gave access to ID 2? Sample event:

User-ABCDEFG assigned Role-'NewRole' on Project-1234 to ABCDEFG

I need to say that User-ABCDEFG gave access to ABCDEFG in a stats sort of way. The user may give 4 or 5 accesses a day, so I would then create a report which shows what that user did.
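A minimal sketch of one way to report this, assuming all events follow the sample format above (the field names grantor, role, project, and grantee are made up for illustration):

| rex "^User-(?<grantor>\w+) assigned Role-'(?<role>[^']+)' on Project-(?<project>\d+) to (?<grantee>\w+)$"
| stats count AS accesses_granted values(role) AS roles values(grantee) AS grantees BY grantor

This gives one row per granting user with everything they handed out, which can then be scheduled as a daily report.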
Hi All,
I am using a case statement to map values instead of other values, but I am not getting the values; I am getting Unknown values. The BucketFolder value is like: inbound/concur

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(
    BucketFolder="%inbound%epm%", "EPM",
    BucketFolder="%inbound%KPIs%", "APEX_File_Upload",
    BucketFolder="%inbound%concur%", "ConcurFile_Upload ",
    true(), "Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId
Hi @kranthimutyala2,
what happens using only spath?

index=abc source="http:clhub-preprod" sourcetype=_json "ct-fin-abc-apps-papi-v1-uw2-ut" "Action Name"
| spath
| rex field=event "^(?<event_type>\w+)"
| where event_type="INFO"

Ciao.
Giuseppe
From the error "Error occurred reading enterprise-attack.json", could it be that it can't find the file, or is it a permissions issue? A few things to check:

1. Verify permissions (user/role) access to the Security Essentials app.
2. Verify it was installed correctly with correct permissions (via the GUI, or copied to the /opt/splunk/etc/apps/ folder with correct Splunk OS-level permissions, assuming this is Linux-based).
3. Uninstall and re-install.

See how that goes first.
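For the file and permissions checks, something along these lines at the OS level (the Splunk_Security_Essentials folder name and the splunk user/group are assumptions for a default Linux install):

# locate the file and confirm the splunk user can read it
find /opt/splunk/etc/apps -name enterprise-attack.json -exec ls -l {} \;
# if ownership is wrong, reset it on the app folder
chown -R splunk:splunk /opt/splunk/etc/apps/Splunk_Security_Essentials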
There are several ways to send data to HEC and not all of them use that format.  The raw endpoint should accept events in your desired format.  See https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/FormateventsforHTTPEventCollector#Format_events_for_HTTP_Event_Collector
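For illustration, a send to the raw endpoint looks something like this (host, port, and token are placeholders; depending on your configuration you may also need a channel query parameter such as ?channel=<GUID>, e.g. when indexer acknowledgment is enabled):

curl -k "https://splunk.example.com:8088/services/collector/raw" \
  -H "Authorization: Splunk <your-HEC-token>" \
  -d 'your event text, exactly as you want it indexed'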
Hi Team,
Our Splunk search heads are hosted in Cloud and managed by Support, and we are currently running the latest version (9.1.2308.203). This relates to the Max Lines configuration within the Format segment of the Search and Reporting app.

Previously, Splunk defaulted to displaying 20 or more lines in search results within the Search and Reporting app. As an administrator responsible for extracting Splunk logs across various applications over the years, I never found the need to expand concise search results to read all lines. However, in recent weeks, perhaps following an upgrade of the Splunk search heads, I've noticed that each time I open a new Splunk search window, or the existing Splunk tab times out and auto-refreshes, the Format > Max Lines option resets to 5. As a result, I consistently have to adjust it after nearly every search, which has become cumbersome.

Therefore, kindly provide guidance on changing the default value from 5 to 20 in the Search and Reporting app on the ad hoc & ES search heads. This adjustment would ease the inconvenience experienced by numerous customers and end users who currently find it troublesome to customize it for each search.

The file is ui-prefs.conf, so I filed a case with Support to address this issue. Unfortunately, Support wasn't able to make the necessary changes at the backend and suggested that I create a custom app and deploy it via the app upload section. Consequently, I created a custom app, deployed it, and it successfully passed the vetting process. Afterward, I restarted the search head, but the changes didn't take effect. Upon reaching out to Support again, they were unable to provide a solution for the issue. Therefore, I require assistance in resolving this matter.

Please refer to the screenshot of the app I deployed for reference. I created the app as below: a MaxLines_Values folder, with default and metadata folders inside it, as mentioned in the screenshot. So kindly help on the same.
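For reference, the usual shape of such an app is a ui-prefs.conf in the app's default folder plus a default.meta that exports it globally. A sketch, using the MaxLines_Values folder name from the post (the exact setting below is my assumption of what the app should contain, and it only affects users who have not already saved their own preference):

# MaxLines_Values/default/ui-prefs.conf
[search]
display.events.maxLines = 20

# MaxLines_Values/metadata/default.meta
[]
export = system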
Essentially, if the only change is the OS, it should be fairly easy to migrate. Ensure the new systems have the same IPs or hostnames, depending on whether you use names or IPs in the configs. Ensure the splunk user and group are created on the new servers, and follow the instructions for installing from a tar file. Stop the current servers, tar up the /opt/splunk folder and any data store folders, then untar them onto the new boxes.
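A sketch of those steps at the shell level, assuming a default /opt/splunk install and a splunk:splunk user/group (adjust paths if your indexes live outside /opt/splunk):

# on the old server
/opt/splunk/bin/splunk stop
tar -C /opt -czf /tmp/splunk-migration.tar.gz splunk
# copy the archive to the new server, then:
tar -C /opt -xzf /tmp/splunk-migration.tar.gz
chown -R splunk:splunk /opt/splunk
sudo -u splunk /opt/splunk/bin/splunk start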
@gcusello

index=abc source="http:clhub-preprod" sourcetype=_json "ct-fin-abc-apps-papi-v1-uw2-ut" "Action Name"
| rex field=event "^(?<event_type>\w+)"
| where event_type="INFO"
| spath input_field=event

The event field contains the above log data.
Hi @kranthimutyala2,
which sourcetype are you using? Did you try json or _json? In that case, INDEXED_EXTRACTIONS=json is enabled.
Ciao.
Giuseppe
@gcusello tried it, but it didn't work
Hi Community,
I have this global setting in inputs.conf:

[http]
enableSSL = 1
port = 8088
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate.cer

I have 2 [token_name] stanzas configured and working fine, but now I need to use a different server certificate for one stanza. So I'd like to do something like this:

[http://stanza1]
token = token1
index = index1
sourcetype = sourcetype1

[http://stanza2]
token = token2
index = index2
sourcetype = sourcetype2
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate_2.cer

I'm not sure it is possible though, since the doc says the per-token settings are only these:

connection_host
disabled
index
indexes
persistentQueueSize
source
queueSize
sourcetype
token

Any hint?

Thanks,
Marta