All Posts


No, I use Ubuntu Linux under WSL on Windows. I can set the permissions correctly, but the problem is that with 644 permissions I can't run slim package app-folder.
I'm guessing you're packaging the app on a Windows machine. That will never work because Windows can't/won't set the file permissions correctly. When I used to package on Windows, I would transfer the .tgz file to a Linux system, explode it, change the permissions, then re-tar it and transfer it back to the Windows machine for uploading.
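A minimal sketch of that round trip on the Linux side (the package and app directory names are placeholders for illustration):

# unpack the package built on Windows
tar -xzf my_app.tgz
# directories should be 755, conf files and other non-executables 644
find my_app -type d -exec chmod 755 {} \;
find my_app -type f -exec chmod 644 {} \;
# re-pack with the corrected permissions
tar -czf my_app.tgz my_app

The re-packed .tgz should then pass the permission checks when validated or uploaded.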
Hi @mshakeb, having an Indexer Cluster, the best solution is adding the three new Indexers to the old CM using RF=3 and SF=3; in this way, after some time, the three new Indexers will have a complete set of data. When the data has been replicated to the new Indexers, remove the three old Indexers one by one, then change RF and SF back to the original values. At last, replace the CM following the documentation. Plan these activities with much attention! Ciao. Giuseppe
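A rough sketch of the CLI side of that procedure (run from $SPLUNK_HOME/bin; the factor values are the ones from this thread):

# on the Cluster Manager: raise the factors so every peer holds a full copy
./splunk edit cluster-config -replication_factor 3 -search_factor 3
./splunk restart
# later, on each old peer in turn: decommission without losing bucket copies
./splunk offline --enforce-counts

The offline --enforce-counts step makes the peer wait until the cluster has re-met RF/SF before it shuts down.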
In clustered Splunk, folder names in the thaweddb folder should match the db_<newest_time>_<oldest_time>_<bucketid>_<guid> naming convention. You can also restore data from another indexer; just change the GUID to the local one (found in etc/instance.cfg). Please note that the rb_ prefix should also be renamed to db_.
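A minimal sketch of that rename (the index name, bucket times, bucket ID, and GUIDs are made-up placeholders):

# find the local GUID
grep guid $SPLUNK_HOME/etc/instance.cfg
# a replicated bucket copied into thaweddb: rename rb_ to db_ and swap in the local GUID
cd $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb
mv rb_1700000000_1690000000_42_AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE \
   db_1700000000_1690000000_42_FFFFFFFF-0000-1111-2222-333333333333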
Whenever I package the Splunk app, I get an execute permission error because I have 744 permissions on conf files, but Splunk expects them to be 644. With 644 permissions I cannot package the app. Is there any workaround for this? Below is a screenshot of the error.
I have a PowerShell script that needs to be run as admin to be able to load in all of the data. It returns a .csv file that it exports to the lookups folder so that we can pull out the data and use it. I have the script in the correct directory on the Splunk server; I can see it and I can run it, but I'm not getting data out of it, which makes me think the script is not being run as an admin. I've tried a few things but can't get it to work correctly. I've come up with a couple of different options for what to do here: 1. Make a managed service account that runs the script as an admin. 2. Try to configure splunkd to allow running as admin (if possible?). 3. Other recommendations? I'm relatively new to Splunk, just trying to learn all I can, and I appreciate any pointers/guidance.
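For context, a minimal sketch of how a scheduled PowerShell input is typically wired up in inputs.conf (the stanza name, script path, and schedule are assumptions). Splunk runs the script as the account the splunkd service runs under, so option 1, a service account with the needed rights running the Splunk service, is usually the cleanest fix:

[powershell://Admin_Data_Pull]
# hypothetical script path; executes under the splunkd service account
script = . "$SplunkHome\etc\apps\my_app\bin\pull_data.ps1"
# cron-style schedule: every 4 hours
schedule = 0 */4 * * *
disabled = 0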
Please, what's the best way to create a search query for identifying knowledge objects from inactive users and cleaning them up?
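One common starting point (a sketch, not a full answer; it assumes saved searches are the objects of interest and that the inactive users have already been removed from Splunk) is to list objects whose owner no longer exists:

| rest /servicesNS/-/-/saved/searches
| fields title eai:acl.owner eai:acl.app
| search NOT [| rest /services/authentication/users | fields title | rename title AS eai:acl.owner]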
I tried to set up the Node.js agent in AppDynamics, and it seems that I've configured everything correctly. However, I'm encountering a connectivity issue: the AppDynamics agent is unable to establish a connection.
Step 1: I downloaded the Express Cart Node.js application from GitHub.
Step 2: After navigating to my Node.js project directory in the terminal, I executed the following commands:
npm install
npm fund
npm audit
npm audit fix
node app.js
Step 3: I configured the agent within the app.js file, providing the necessary details such as access key, account name, tier name, and node name.
Despite ensuring that all configurations are accurate, the agent still seems unable to establish a connection. Any insights on what might be causing this connectivity issue would be greatly appreciated!
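A quick reachability check worth trying from the app host (the controller hostname is a placeholder; SaaS controllers typically listen on 443 with SSL enabled):

# can this machine reach the controller at all?
curl -v https://mycontroller.saas.appdynamics.com:443/controller/rest/serverstatus

If that fails, the problem is network/proxy/firewall rather than the agent configuration in app.js.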
We are looking to confirm, regarding the "JAMF Integrations" app, that it supports the Jamf Pro API (vs. the Classic API) and that it can be configured to use API Roles and Clients with an Access Token, Client ID, and Client Secret rather than Basic Auth.
Thanks for the response. We do not want downtime; please find the steps below.

Old Splunk indexers:
- All the data is ingesting (storage path) into the default location (/opt/splunk/var/lib/splunk)
- Has a CM

New Splunk servers:
1. Prepare 3 new indexers and a new CM
2. On the new indexers the storage path for hot and warm data is /splunk_hot and for cold data is /splunk_cold

Plan for migration from old to new (without downtime):
1. Build a new Cluster Master
2. Build 3 new indexers with storage paths /splunk_hot and /splunk_cold
3. Create symbolic links on the old indexers with the same names as the new indexers' storage paths (/splunk_hot and /splunk_cold). Example: ln -s /opt/splunk/var/lib/splunk/….. /splunk_hot (I am not sure here)
4. Change the path in indexes.conf on the old Cluster Master:
[volume:primary]
# path = /opt/splunk/var/lib/splunk (this is the old path and it is commented out)
path = /splunk_hot
[volume:cold]
# path = /opt/splunk/var/lib/splunk (this is the old path and it is commented out)
path = /splunk_cold
5. Push the bundle from the old CM
6. Join the new indexer servers to the old CM (this will sync the data)
7. Wait till all the data is synced
8. Move the old CM config to the new Cluster Master
9. Shut down the old CM
10. As the last step, take the old indexers offline with enforce-counts

I am stuck here: I want to create a symbolic link on the old indexer servers. How can I create it and point the hot data to /splunk_hot and the colddb to /splunk_cold? I can see on the old indexers there are lots of indexes available (like windows, linux, security, waf, firewall).
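On the symlink question, a minimal sketch under the assumption that both volumes currently live under the default location (verify that the per-index homePath/coldPath values still resolve to the existing bucket directories before restarting, since hot/warm and cold normally point at different trees):

# run on each old indexer while Splunk is stopped; both links point at the existing data
sudo ln -s /opt/splunk/var/lib/splunk /splunk_hot
sudo ln -s /opt/splunk/var/lib/splunk /splunk_cold

With those links in place, the volume paths in the pushed indexes.conf resolve to the old data without anything being copied.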
I am getting the below error: Error in 'EvalCommand': Type checking failed. The '==' operator received different types.
@gcusello I don't see any difference; it's not extracting anything.
Hi @LizAndy123, let me understand: you want to extract the user field (located at the beginning of the event) and the resource to access (located at the end of the event). In this case you have to use two regexes:
| rex "^(?<user>[^ ]+)"
| rex "(?<resource>\w+)$"
Ciao. Giuseppe
Hi @karthi2809, you can use % wildcards only with the like function; otherwise you have to use *. For example:
| rename bucketFolder AS BucketFolder
| eval InterfaceName=case(
    like(BucketFolder, "%inbound%epm%"), "EPM",
    like(BucketFolder, "%inbound%KPIs%"), "APEX_File_Upload",
    like(BucketFolder, "%inbound%concur%"), "ConcurFile_Upload",
    true(), "Unknown")
| stats values(InterfaceName) AS InterfaceName min(timestamp) AS Timestamp values(BucketFolder) AS BucketFolder values(Status) AS Status BY correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId
Ciao. Giuseppe
I have an event from which I can extract the 2 different IDs, but how do I show that ID 1 gave access to ID 2? Sample event: User-ABCDEFG assigned Role-'NewRole' on Project-1234 to ABCDEFG. I need to say that User-ABCDEFG gave access to ABCDEFG, in a stats sort of way; the user may give 4 or 5 accesses a day, so I would then create a report which shows what that user did.
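One way to get there (a minimal sketch, assuming events follow the sample format above; the field names grantor/grantee/role/project are made up for illustration):

| rex "User-(?<grantor>\w+) assigned Role-'(?<role>[^']+)' on Project-(?<project>\w+) to (?<grantee>\w+)"
| stats count AS accesses_granted values(role) AS roles values(grantee) AS grantees BY grantor

The stats line then yields one row per granting user, listing who they gave access to and how often.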
Hi All, I am using a case statement to map values instead of other values, but I am not getting the values; I am getting "Unknown" values. The BucketFolder value is like: inbound/concur

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder="%inbound%epm%", "EPM", BucketFolder="%inbound%KPIs%", "APEX_File_Upload", BucketFolder="%inbound%concur%", "ConcurFile_Upload", true(), "Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId
Hi @kranthimutyala2, what happens using only spath?
index=abc source="http:clhub-preprod" sourcetype=_json "ct-fin-abc-apps-papi-v1-uw2-ut" "Action Name"
| spath
| rex field=event "^(?<event_type>\w+)"
| where event_type="INFO"
Ciao. Giuseppe
From the error "Error occurred reading enterprise-attack.json": could it be that it can't find the file, or is it a permissions issue? A few things to check:
1. Verify the user/role has access to the Security Essentials app.
2. Verify the app was installed correctly with the correct permissions (via the GUI, or copied to the /opt/splunk/etc/apps/ folder with the correct Splunk OS-level permissions, assuming this is Linux-based).
3. Uninstall and re-install.
See how that goes first.
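If it is OS-level permissions, a quick check along these lines can confirm it (the app folder name and the splunk user are assumptions; adjust to your install):

# does the splunk user own the app directory and everything under it?
ls -ld /opt/splunk/etc/apps/Splunk_Security_Essentials
sudo chown -R splunk:splunk /opt/splunk/etc/apps/Splunk_Security_Essentials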
There are several ways to send data to HEC and not all of them use that format. The raw endpoint should accept events in your desired format. See https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/FormateventsforHTTPEventCollector#Format_events_for_HTTP_Event_Collector
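For reference, a minimal sketch of sending a raw event to that endpoint with curl (host, port, token, and channel GUID are placeholders; a channel identifier is expected on the raw endpoint and is mandatory when indexer acknowledgment is enabled):

curl -k "https://splunk.example.com:8088/services/collector/raw?channel=0aeeac95-ad0f-4ba5-b38f-4a1b4b937bbc" \
  -H "Authorization: Splunk <hec-token>" \
  -d "my event text in its original format"

The body is indexed as-is, so the sourcetype's usual line-breaking and timestamp rules are applied to it.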